int arr[] = {10, 20, 30};
int *ptr = arr;
printf("%d", *(ptr + 1));
ptr points to arr[0], so ptr + 1 points to arr[1], and *(ptr + 1) dereferences that address to yield 20. Pointer arithmetic is scaled by the pointee type: each increment advances the address by sizeof(int) bytes, not by one byte.
strcpy() performs no bounds checking: if the source string is longer than the destination buffer, it writes past the end of the buffer, causing a buffer overflow, a classic security vulnerability. strncpy() is safer because it accepts a length limit, but note that it does not null-terminate the destination when the source is too long, so the terminator must be added manually.
int x = 5;
int *ptr = &x;
printf("%d %d", *ptr, x);
*ptr dereferences the pointer to access the value at the address it points to, which is x = 5. So both *ptr and x print 5.
Both A and B are correct. Option A uses an explicit cast, which is optional in C (though required in C++). Option B omits the cast. In either case, using sizeof(int) is preferred over hardcoding 4, since the size of int may vary across systems.
scanf() reads formatted input based on format specifiers and, with %s, stops at the first whitespace character. gets() reads characters until a newline is encountered, with no way to limit the input length. Note: gets() was deprecated for exactly this buffer overflow vulnerability and was removed from the standard entirely in C11; use fgets() instead.
int x = 5;
int y = ++x + x++;
printf("%d", y);
This code exhibits undefined behavior: x is modified twice (by ++x and x++) without an intervening sequence point. The result varies with the compiler, so no single output is correct.
A Heap (either Min-Heap or Max-Heap) is the optimal data structure for implementing priority queues with O(log n) insertion and deletion time complexity.
In a Max-Heap, the element with the highest priority is always at the root, enabling efficient extraction of the maximum element.
Arrays and Linked Lists would require O(n) time for priority-based operations, while Graphs are unsuitable for this purpose.
Heaps are fundamental to algorithms such as Dijkstra's shortest-path algorithm and Prim's minimum-spanning-tree algorithm.
Hash Tables achieve O(1) average-case time complexity for both insertion and search operations when using a well-designed hash function that minimizes collisions and maintains a good load factor.
Deletion also operates in O(1) average time.
However, in worst-case scenarios with poor hash functions or high collision rates, these operations can degrade to O(n).
The key to performance is maintaining low collision rates through techniques like chaining or open addressing.
#include <stdio.h>

int main() {
char arr[] = "GATE";
char *p = arr;
printf("%zu\n", sizeof(arr));
printf("%zu", sizeof(p));
return 0;
}
The output is 5 and 8. sizeof(arr) returns 5 because "GATE" occupies 4 characters plus 1 null terminator, and sizeof applied to an array yields the total memory allocated for the entire array. sizeof(p) yields the size of the pointer variable itself, not what it points to: 8 bytes on a typical 64-bit system (it would be 4 on a 32-bit system). The key concept is that sizeof() behaves differently for arrays versus pointers. The incorrect options either forget the null terminator when sizing the array, assume a different pointer width, or confuse the size of the array with the size of the pointer. Strictly speaking, sizeof yields a size_t, so the portable format specifier is %zu rather than %d.
#include <stdio.h>

int main() {
int x = 5;
int *p = &x;
int **q = &p;
printf("%d", **q + 1);
return 0;
}
The correct answer is 6. q is a pointer to a pointer: q holds the address of p, and p holds the address of x. Each asterisk dereferences one level, so **q follows both links and yields the value of x, which is 5; adding 1 gives 6, which is printed. Other answers would come from dereferencing only one level (*q yields the address stored in p, not a value) or from forgetting to add 1.