A high-frequency trading system uses AtomicInteger for concurrent counter updates. However, performance degrades as more threads access the same counter. What is the primary cause and best solution?
A. False sharing due to cache line contention; use LongAdder instead
Correct Answer:
A. False sharing due to cache line contention; use LongAdder instead
EXPLANATION
When many threads update a single AtomicInteger, every compare-and-swap invalidates the cache line holding the counter on all other cores, and false sharing with neighboring fields on the same line adds further contention. LongAdder (available since Java 8) stripes updates across multiple padded cells so concurrent writers rarely collide, then sums the cells on read. It remains the recommended (2024-25) optimization for write-heavy counters in high-concurrency scenarios.
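The contrast can be sketched with a small self-contained counter (class and method names here are illustrative):

```java
import java.util.concurrent.atomic.LongAdder;

public class AdderDemo {
    // Each thread increments the shared adder. LongAdder stripes the
    // updates across internal padded cells, so concurrent writers
    // usually hit different cache lines instead of one hot CAS target.
    static long countWith(int threads, int incrementsPerThread) throws InterruptedException {
        LongAdder adder = new LongAdder();
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    adder.increment();
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return adder.sum(); // totals all cells at read time
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(countWith(8, 100_000)); // 800000
    }
}
```

The trade-off is that sum() is not an atomic snapshot: LongAdder favors write throughput over exact point-in-time reads, which suits statistics counters.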
Consider a ThreadLocal variable initialized in a thread pool executor with 10 threads. If the same thread is reused from the pool for a different task, what is the state of its ThreadLocal variable?
A. It is automatically reset to its initial value
B. It retains the value from the previous task execution
C. It becomes null
D. It throws ThreadLocalException
Correct Answer:
B. It retains the value from the previous task execution
EXPLANATION
ThreadLocal values persist across task executions on the same thread. Thread pools reuse threads, so a previous task's ThreadLocal value remains visible to the next task unless it is explicitly removed. This can leak data between unrelated tasks (and leak memory). Developers must call remove(), typically in a finally block, to clean up.
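The leak is easy to demonstrate with a single-thread executor, which guarantees both tasks run on the same pooled thread (names such as ThreadLocalLeakDemo are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class ThreadLocalLeakDemo {
    static final ThreadLocal<String> CONTEXT = ThreadLocal.withInitial(() -> "initial");

    // Runs two tasks on a one-thread pool; the second task observes the
    // value left behind by the first unless remove() was called.
    static String valueSeenBySecondTask(boolean cleanUp) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        AtomicReference<String> seen = new AtomicReference<>();
        pool.submit(() -> {
            CONTEXT.set("task-1-data");
            if (cleanUp) CONTEXT.remove(); // explicit cleanup before the thread is reused
        });
        pool.submit(() -> seen.set(CONTEXT.get()));
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return seen.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(valueSeenBySecondTask(false)); // task-1-data (leaked)
        System.out.println(valueSeenBySecondTask(true));  // initial
    }
}
```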
A multi-threaded application experiences poor performance despite having adequate CPU cores. Code inspection reveals frequent calls to synchronized blocks on shared objects. Which modern Java feature (2024-25) could optimize this without major refactoring?
A. Virtual Threads with appropriate lock strategies
B. Increasing thread pool size
C. Using volatile keyword everywhere
D. Converting all methods to static
Correct Answer:
A. Virtual Threads with appropriate lock strategies
EXPLANATION
Virtual Threads (Project Loom, finalized in Java 21) are lightweight threads that make blocking cheap, enabling massive concurrency without large thread pools. The caveat is the "appropriate lock strategy": until JEP 491 (Java 24), a virtual thread that blocks inside a synchronized block is pinned to its carrier thread, so hot synchronized sections should be migrated to java.util.concurrent locks such as ReentrantLock. Increasing the pool size only adds scheduling overhead, volatile provides visibility but not mutual exclusion, and converting methods to static is unrelated to contention.
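A minimal sketch of the combination, assuming a Java 21+ runtime (class name illustrative): each task runs on a virtual thread and guards shared state with a ReentrantLock instead of synchronized.

```java
import java.util.concurrent.locks.ReentrantLock;

public class VirtualThreadLockDemo {
    static int run(int n) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock(); // j.u.c lock: no carrier-thread pinning
        int[] counter = {0};
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = Thread.ofVirtual().start(() -> {
                lock.lock();
                try {
                    counter[0]++; // critical section under the explicit lock
                } finally {
                    lock.unlock();
                }
            });
        }
        for (Thread t : workers) t.join();
        return counter[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(1_000)); // 1000
    }
}
```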
In a producer-consumer scenario using BlockingQueue, what happens when a consumer thread calls take() on an empty queue?
A. It throws NoSuchElementException immediately
B. It blocks until a producer adds an element
C. It returns null
D. It spins in a busy-wait loop
Correct Answer:
B. It blocks until a producer adds an element
EXPLANATION
BlockingQueue's take() method blocks the calling thread until an element becomes available. This is safer and more efficient than busy-waiting. NoSuchElementException is thrown by non-blocking operations like remove().
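The blocking behavior can be seen in a tiny producer-consumer sketch (class name illustrative): the consumer calls take() first and simply parks until the producer's put() delivers an element.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class TakeDemo {
    static String consumeOne() throws InterruptedException {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);
        Thread producer = new Thread(() -> {
            try {
                Thread.sleep(100);     // let the consumer block on the empty queue first
                queue.put("payload");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        String item = queue.take();    // blocks here until "payload" arrives
        producer.join();
        return item;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(consumeOne()); // payload
    }
}
```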
A developer needs to ensure that exactly 5 threads complete their tasks before proceeding to the next phase. Which synchronization utility is most appropriate?
A. Semaphore
B. CountDownLatch
C. Phaser
D. Mutex
Correct Answer:
B. CountDownLatch
EXPLANATION
CountDownLatch is designed for one-shot barrier scenarios where waiting threads block until a countdown reaches zero. Initialized with a count of 5, it fits this case exactly: each worker calls countDown() on completion and the coordinator calls await(). A Semaphore controls access to a resource rather than completion, Phaser handles multiple reusable phases, and Mutex is not a class in the Java standard library.
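A minimal sketch (illustrative names): five workers each call countDown(), and the coordinating thread's await() returns only once all five have finished.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    static int runPhase() throws InterruptedException {
        CountDownLatch done = new CountDownLatch(5);
        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                completed.incrementAndGet(); // the worker's "task"
                done.countDown();            // signal completion
            }).start();
        }
        done.await(); // blocks until the count reaches zero
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runPhase()); // 5 workers completed before this line
    }
}
```

countDown() happens-before the corresponding await() returning, so the coordinator is guaranteed to see all five increments.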
Which of the following methods will cause a thread to release the monitor lock it holds while waiting?
A. wait()
B. sleep()
C. yield()
D. join()
Correct Answer:
A. wait()
EXPLANATION
The wait() method releases the monitor lock of the object it is called on (it must be invoked while holding that monitor) and puts the thread into the waiting state until notify() or notifyAll() is called on the same object. Note that it releases only that object's monitor, not any other locks the thread happens to hold. sleep(), yield(), and join() do not release any locks.
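The hand-off is visible in a classic wait/notify sketch (names illustrative): the signalling thread can only enter synchronized (lock) because the waiter's wait() released that monitor.

```java
public class WaitDemo {
    static boolean waitAndSignal() throws InterruptedException {
        Object lock = new Object();
        boolean[] ready = {false};
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready[0]) {          // guard against spurious wakeups
                    try {
                        lock.wait();         // releases lock's monitor while waiting
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        });
        waiter.start();
        Thread.sleep(100);                   // give the waiter time to block
        synchronized (lock) {                // acquirable only because wait() released it
            ready[0] = true;
            lock.notify();
        }
        waiter.join();
        return !waiter.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(waitAndSignal()); // true
    }
}
```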
In the context of Java 21 Virtual Threads, what is a major limitation of traditional threading that Virtual Threads solve?
A. Virtual Threads eliminate the need for synchronization
B. Virtual Threads allow creating millions of lightweight threads with minimal memory overhead
C. Virtual Threads make garbage collection unnecessary
D. Virtual Threads automatically detect deadlocks
Correct Answer:
B. Virtual Threads allow creating millions of lightweight threads with minimal memory overhead
EXPLANATION
Virtual Threads are lightweight and can be created in very large numbers (millions) with minimal memory footprint, because each carries only a small, growable stack managed by the JVM. Platform threads, by contrast, each wrap a comparatively heavy OS thread with a large fixed stack, which caps how many can exist. This removes the scalability ceiling of thread-per-request designs in high-concurrency applications.
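A quick sketch of the scale difference, assuming Java 21+ (names illustrative): spawning tens of thousands of virtual threads is routine, where the same number of platform threads would exhaust memory on most machines.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class ManyVirtualThreads {
    static int runMany(int n) throws InterruptedException {
        AtomicInteger done = new AtomicInteger();
        Thread[] threads = new Thread[n];
        for (int i = 0; i < n; i++) {
            // Each virtual thread needs only a small, growable JVM-managed
            // stack, not a fixed OS-thread stack of roughly a megabyte.
            threads[i] = Thread.ofVirtual().start(done::incrementAndGet);
        }
        for (Thread t : threads) t.join();
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runMany(100_000)); // 100000
    }
}
```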
Which of the following best describes a key advantage of StampedLock?
B. It provides optimistic read locks without acquiring a write lock
C. It is a replacement for ReentrantLock in all scenarios
D. It guarantees fairness like a fair ReentrantLock
Correct Answer:
B. It provides optimistic read locks without acquiring write lock
EXPLANATION
StampedLock provides optimistic reads that don't require acquiring a lock. If the data changes during an optimistic read, validation fails and a pessimistic lock can be acquired.
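The idiom can be sketched as an x/y point (class name illustrative): tryOptimisticRead() returns a stamp without blocking writers, and validate() reports whether a write intervened, falling back to a pessimistic read lock when it did.

```java
import java.util.concurrent.locks.StampedLock;

public class StampedPoint {
    private final StampedLock lock = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {
        long stamp = lock.writeLock();
        try {
            x += dx;
            y += dy;
        } finally {
            lock.unlockWrite(stamp);
        }
    }

    double distanceFromOrigin() {
        long stamp = lock.tryOptimisticRead(); // no lock actually acquired
        double cx = x, cy = y;                 // read under the optimistic stamp
        if (!lock.validate(stamp)) {           // a write slipped in: fall back
            stamp = lock.readLock();
            try {
                cx = x;
                cy = y;
            } finally {
                lock.unlockRead(stamp);
            }
        }
        return Math.hypot(cx, cy);
    }

    public static void main(String[] args) {
        StampedPoint p = new StampedPoint();
        p.move(3, 4);
        System.out.println(p.distanceFromOrigin()); // 5.0
    }
}
```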
What is the output of the following code?
Thread t = new Thread(() -> { throw new RuntimeException("Error"); });
t.setUncaughtExceptionHandler((thread, ex) -> System.out.println("Caught"));
t.start();
A. Caught
B. RuntimeException is thrown to main thread
C. No output, exception is silently ignored
D. Compilation error
Correct Answer:
A. Caught
EXPLANATION
The UncaughtExceptionHandler is invoked when an exception is thrown in a thread and not caught. It will print 'Caught' before the thread terminates.
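A runnable version of the snippet, capturing the handler's message so it can be observed after join() (names illustrative):

```java
import java.util.concurrent.atomic.AtomicReference;

public class HandlerDemo {
    // The handler runs on the failing thread itself, after the uncaught
    // exception propagates out of run() and just before the thread dies;
    // the default handler (which prints a stack trace) is skipped.
    static String runAndCapture() throws InterruptedException {
        AtomicReference<String> message = new AtomicReference<>();
        Thread t = new Thread(() -> { throw new RuntimeException("Error"); });
        t.setUncaughtExceptionHandler((thread, ex) -> message.set("Caught"));
        t.start();
        t.join(); // the handler has run by the time join() returns
        return message.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAndCapture()); // Caught
    }
}
```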
In a ForkJoinPool, what is the primary advantage over ExecutorService for recursive tasks?
A. Better exception handling
B. Work-stealing algorithm for better load balancing
C. Lower memory footprint
D. Automatic task prioritization
Correct Answer:
B. Work-stealing algorithm for better load balancing
EXPLANATION
ForkJoinPool uses a work-stealing algorithm where idle threads can 'steal' tasks from busy threads' queues, providing better load balancing for divide-and-conquer problems.
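A classic divide-and-conquer sketch (class name illustrative): a RecursiveTask splits a range sum, fork()s one half into its work queue where idle workers can steal it, and computes the other half directly.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private final long from, to;

    SumTask(long from, long to) {
        this.from = from;
        this.to = to;
    }

    @Override
    protected Long compute() {
        if (to - from <= 1_000) {             // small enough: sum sequentially
            long sum = 0;
            for (long i = from; i <= to; i++) sum += i;
            return sum;
        }
        long mid = (from + to) / 2;
        SumTask left = new SumTask(from, mid);
        SumTask right = new SumTask(mid + 1, to);
        left.fork();                          // queued; idle workers may steal it
        return right.compute() + left.join(); // do half here, join the other
    }

    public static void main(String[] args) {
        long result = new ForkJoinPool().invoke(new SumTask(1, 100_000));
        System.out.println(result); // 5000050000
    }
}
```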