A high-frequency trading system uses AtomicInteger for concurrent counter updates. However, performance degrades as more threads access the same counter. What is the primary cause and best solution?
A. False sharing due to cache line contention; use LongAdder instead
Correct Answer:
A. False sharing due to cache line contention; use LongAdder instead
EXPLANATION
Under high contention, every AtomicInteger update is a compare-and-swap (CAS) on the same memory word, so threads constantly invalidate each other's copy of that cache line and retry failed CAS attempts. LongAdder stripes updates across multiple padded cells and sums them on read, which sharply reduces contention in high-concurrency scenarios.
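A minimal sketch of the LongAdder approach; the class and method names here are illustrative, not from the original question:

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    // Each thread's increments land on (mostly) separate internal cells;
    // sum() combines the cells at read time.
    static long countWith(int threads, int perThread) throws InterruptedException {
        LongAdder adder = new LongAdder();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) adder.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return adder.sum();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWith(8, 100_000)); // 800000
    }
}
```

With a single writer the two classes perform similarly; LongAdder's advantage appears only when many threads increment concurrently, and its sum() is not an atomic snapshot.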
Consider a ThreadLocal variable initialized in a thread pool executor with 10 threads. If the same thread is reused from the pool for a different task, what is the state of its ThreadLocal variable?
A. It is automatically reset to its initial value
B. It retains the value from the previous task execution
C. It becomes null
D. It throws ThreadLocalException
Correct Answer:
B. It retains the value from the previous task execution
EXPLANATION
ThreadLocal values persist across task executions on the same thread. Thread pools reuse threads, so a previous task's ThreadLocal value remains visible to the next task unless it is explicitly removed, which can leak data between unrelated tasks. Developers must call remove(), typically in a finally block, to clean up.
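The leak can be demonstrated with a single-thread pool, which guarantees the same thread runs every task; the class and method names are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadLocalLeakDemo {
    static final ThreadLocal<StringBuilder> BUF =
            ThreadLocal.withInitial(StringBuilder::new);

    // Runs several "tasks" on one reused pooled thread; the second task sees
    // the first task's ThreadLocal value until remove() is called.
    static String demo() throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1); // one reused thread
        try {
            pool.submit(() -> BUF.get().append("task1")).get();           // task 1 writes
            String stale = pool.submit(() -> BUF.get().toString()).get(); // task 2 reads leftover
            pool.submit(() -> BUF.remove()).get();                        // explicit cleanup
            String fresh = pool.submit(() -> BUF.get().toString()).get(); // back to initial value
            return stale + "|" + fresh;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo()); // task1|
    }
}
```

The stale read returns "task1" even though it belongs to a different task; after remove(), the next get() re-runs the withInitial supplier.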
In the context of Java 21 Virtual Threads, what is a major limitation of traditional threading that Virtual Threads solve?
A. Virtual Threads eliminate the need for synchronization
B. Virtual Threads allow creating millions of lightweight threads with minimal memory overhead
C. Virtual Threads make garbage collection unnecessary
D. Virtual Threads automatically detect deadlocks
Correct Answer:
B. Virtual Threads allow creating millions of lightweight threads with minimal memory overhead
EXPLANATION
Virtual Threads (finalized in Java 21) are lightweight and can be created in very large numbers, millions if needed, with minimal memory footprint, unlike platform threads, which map one-to-one onto heavyweight OS threads. This solves scalability limits in high-concurrency, I/O-bound applications.
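A minimal sketch (requires Java 21; names are illustrative) using the per-task virtual-thread executor, whose close() waits for all submitted tasks:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.LongAdder;

public class VirtualThreadDemo {
    // Spawns one virtual thread per task; each occupies a carrier (platform)
    // thread only while actually running, not while blocked.
    static long run(int n) throws InterruptedException {
        LongAdder done = new LongAdder();
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < n; i++) {
                exec.submit(done::increment);
            }
        } // try-with-resources: close() blocks until every task finishes
        return done.sum();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run(100_000)); // 100000
    }
}
```

Creating 100,000 platform threads this way would typically exhaust memory (each needs a sizable OS stack); virtual threads make the same code practical.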
B. It provides optimistic read locks without acquiring a write lock
C. It is a replacement for ReentrantLock in all scenarios
D. It guarantees fairness like a fair ReentrantLock
Correct Answer:
B. It provides optimistic read locks without acquiring write lock
EXPLANATION
StampedLock's tryOptimisticRead() returns a stamp without acquiring any lock, so readers pay almost nothing when there is no write contention. After copying the data, the reader calls validate(stamp); if a write occurred in the meantime, validation fails and the reader falls back to a pessimistic read lock.
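The optimistic-read pattern looks like this (a sketch in the style of the StampedLock Javadoc example; the class name is illustrative):

```java
import java.util.concurrent.locks.StampedLock;

public class StampedPoint {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = lock.writeLock();           // exclusive write
        try { x += dx; y += dy; }
        finally { lock.unlockWrite(stamp); }
    }

    double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead();   // no lock acquired
        double cx = x, cy = y;                   // copy a snapshot
        if (!lock.validate(stamp)) {             // a write intervened: fall back
            stamp = lock.readLock();
            try { cx = x; cy = y; }
            finally { lock.unlockRead(stamp); }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }

    public static void main(String[] args) {
        StampedPoint p = new StampedPoint();
        p.move(3, 4);
        System.out.println(p.distanceFromOrigin()); // 5.0
    }
}
```

Note the snapshot is copied to locals before validate(); reading shared fields after a failed validation could observe a torn state.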
What is the output of the following code?
Thread t = new Thread(() -> { throw new RuntimeException("Error"); });
t.setUncaughtExceptionHandler((thread, ex) -> System.out.println("Caught"));
t.start();
A. Caught
B. RuntimeException is thrown to the main thread
C. No output; the exception is silently ignored
D. Compilation error
Correct Answer:
A. Caught
EXPLANATION
A thread's UncaughtExceptionHandler is invoked when the thread is about to terminate due to an exception that was never caught. Here the handler is registered before start(), so it prints 'Caught' as the thread dies; without a handler, the default behavior prints the stack trace to System.err.
In a ForkJoinPool, what is the primary advantage over ExecutorService for recursive tasks?
A. Better exception handling
B. Work-stealing algorithm for better load balancing
C. Lower memory footprint
D. Automatic task prioritization
Correct Answer:
B. Work-stealing algorithm for better load balancing
EXPLANATION
ForkJoinPool uses a work-stealing algorithm where idle threads can 'steal' tasks from busy threads' queues, providing better load balancing for divide-and-conquer problems.
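A typical divide-and-conquer task looks like this (a sketch; the class name and threshold are illustrative):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;
    private final long[] data;
    private final int lo, hi;

    SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {          // small enough: sum directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                          // queued; an idle worker may steal it
        return right.compute() + left.join(); // work on the other half meanwhile
    }

    public static void main(String[] args) {
        long[] data = new long[100_000];
        for (int i = 0; i < data.length; i++) data[i] = 1;
        long total = ForkJoinPool.commonPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total); // 100000
    }
}
```

The fork()/compute()/join() pattern keeps the current worker busy on one half while the forked half sits in its deque, available for stealing.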
Which scenario can lead to livelock in multithreading?
A. Two threads continuously change state in response to each other but never make progress
B. One thread blocks another thread indefinitely
C. Multiple threads access the same resource without synchronization
D. A thread is waiting for a resource held by another waiting thread
Correct Answer:
A. Two threads continuously change state in response to each other but never make progress
EXPLANATION
Livelock occurs when threads are not blocked but continuously change state in response to each other (like two people trying to pass each other), preventing progress. This differs from deadlock where threads are blocked.
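The pattern can be illustrated with a deterministic, single-threaded simulation of the classic two-lock scenario (a real livelock depends on timing between actual threads; names here are illustrative):

```java
public class LivelockSimulation {
    // Lockstep simulation: each worker grabs its first lock, sees the other's
    // lock held, politely releases, and retries. State changes every round,
    // yet no work is ever completed.
    static int rounds(int cap) {
        boolean lock1Free = true, lock2Free = true;
        int round = 0;
        while (round < cap) {
            lock1Free = false; lock2Free = false; // A takes lock1, B takes lock2
            lock1Free = true;  lock2Free = true;  // each needs the other's lock: both back off
            round++;                              // repeat forever (bounded for the demo)
        }
        return round;
    }

    public static void main(String[] args) {
        System.out.println(rounds(1_000)); // runs the full cap: no progress was ever made
    }
}
```

A common fix is to break the symmetry, e.g. with a random backoff before retrying or a fixed lock-acquisition order.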
What is the behavior of ReentrantReadWriteLock when multiple threads perform read operations?
A. Only one thread can read at a time
B. Multiple threads can read simultaneously, but writing locks them out
C. Threads must alternate between read and write operations
D. All threads must wait for an exclusive lock
Correct Answer:
B. Multiple threads can read simultaneously, but writing locks them out
EXPLANATION
ReentrantReadWriteLock allows multiple threads to acquire the read lock simultaneously, but only one thread can hold the write lock. This improves concurrency for read-heavy workloads.
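Shared read access can be observed directly with getReadLockCount(); this sketch (illustrative names) parks every reader inside the lock before counting:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwLockDemo {
    // Starts n reader threads, waits until all of them hold the read lock at
    // the same time, and reports how many concurrent read holds exist.
    static int concurrentReaders(int n) throws InterruptedException {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
        CountDownLatch allIn = new CountDownLatch(n);
        CountDownLatch release = new CountDownLatch(1);
        Thread[] ts = new Thread[n];
        for (int i = 0; i < n; i++) {
            ts[i] = new Thread(() -> {
                rw.readLock().lock();          // read locks do not exclude each other
                try {
                    allIn.countDown();
                    release.await();           // keep holding until told to let go
                } catch (InterruptedException ignored) {
                } finally {
                    rw.readLock().unlock();
                }
            });
            ts[i].start();
        }
        allIn.await();                          // every reader is now inside
        int holders = rw.getReadLockCount();    // all n read locks held at once
        release.countDown();
        for (Thread t : ts) t.join();
        return holders;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(concurrentReaders(4)); // 4
    }
}
```

A thread calling rw.writeLock().lock() during that window would block until every read lock is released.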
Consider a scenario with 3 threads updating a shared counter. Which synchronization mechanism is MOST efficient?
A. synchronized block on counter
B. AtomicInteger
C. ReentrantLock
D. Semaphore with permits=1
Correct Answer:
B. AtomicInteger
EXPLANATION
AtomicInteger uses lock-free compare-and-swap (CAS) operations, which avoid the overhead of acquiring and releasing monitors or locks. For a simple counter shared by a few threads, this is more efficient than synchronized blocks, ReentrantLock, or a Semaphore used as a mutex.
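The counter scenario from the question can be sketched as follows (illustrative names); incrementAndGet() is a single CAS loop with no blocking:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    static int count(int threads, int perThread) throws InterruptedException {
        AtomicInteger counter = new AtomicInteger();
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counter.incrementAndGet(); // lock-free CAS; retries if another thread won
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(count(3, 10_000)); // 30000
    }
}
```

With only three threads, CAS retries are rare; at much higher thread counts, LongAdder (see the first question) scales better.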