Q. What is the difference between far and near?
Compilers for PC compatibles use two types of pointers.
near pointers are 16 bits long and can address a 64KB range. far pointers are 32 bits long and can address a 1MB range.
near pointers operate within a 64KB segment. There's one segment for function addresses and one segment for data.
far pointers have a 16-bit base (the segment address) and a 16-bit offset. The base is multiplied by 16, so a far pointer is effectively 20 bits long. For example, if a far pointer had a segment of 0x7000 and an offset of 0x1224, the pointer would refer to address 0x71224. A far pointer with a segment of 0x7122 and an offset of 0x0004 would refer to the same address.
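To make the arithmetic concrete, here is a small sketch that computes the linear address from the two pointers in the example above. It's plain integer math in standard C, so it compiles anywhere; no 16-bit compiler is needed.

    #include <stdio.h>

    /* The 20-bit linear address a real-mode far pointer refers to is
       (segment * 16) + offset. */
    static unsigned long linear(unsigned int seg, unsigned int off)
    {
        return ((unsigned long)seg << 4) + off;
    }

    int main(void)
    {
        printf("0x7000:0x1224 -> 0x%05lX\n", linear(0x7000, 0x1224)); /* 0x71224 */
        printf("0x7122:0x0004 -> 0x%05lX\n", linear(0x7122, 0x0004)); /* 0x71224 */
        return 0;
    }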
Before you compile your code, you must tell the compiler which memory model to use. If you use a small-code memory model, near pointers are used by default for function addresses. That means that all the functions need to fit in one 64KB segment. With a large-code model, the default is to use far function addresses. You'll get near pointers with a small data model, and far pointers with a large data model. These are just the defaults; you can declare variables and functions as explicitly near or far.
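As a rough sketch of what explicit declarations look like (this assumes a 16-bit DOS compiler such as Borland or Microsoft C, where near and far are keywords; they are not standard C, and the video-memory address is just the classic example):

    /* Not standard C: near and far are 16-bit DOS compiler extensions. */
    char near *scratch;                          /* 16-bit pointer into the default data segment */
    char far  *video = (char far *)0xB8000000L;  /* 32-bit segment:offset pointer to B800:0000   */

    void far  print_report(void);   /* callable from any code segment                */
    void near local_helper(void);   /* must live in the same code segment as callers */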
far pointers are a little slower. Whenever one is used, the code or data segment register needs to be swapped out. far pointers also have odd semantics for arithmetic and comparison. For example, the two far pointers in the preceding example point to the same address, but they would compare as different! If your program fits in a small-data, small-code memory model, your life will be easier. If it doesn't, there's not much you can do.
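Here is a sketch of the comparison quirk, assuming a real-mode DOS compiler such as Borland C, whose <dos.h> provides an MK_FP() macro for building a far pointer from a segment and an offset:

    #include <dos.h>    /* MK_FP() is compiler-specific, not standard C */

    void compare_demo(void)
    {
        char far *a = (char far *)MK_FP(0x7000, 0x1224);
        char far *b = (char far *)MK_FP(0x7122, 0x0004);

        /* Both refer to linear address 0x71224, but == compares the raw
           segment:offset bits, so this test is false. */
        if (a == b) {
            /* never reached */
        }
    }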
If it sounds confusing, it is. There are some additional, compiler-specific wrinkles. Check your compiler manuals for details.
Q. When should a far pointer be used?
Sometimes you can get away with using a small memory model in most of a given program. There might be just a few things that don't fit in your small data and code segments.
When that happens, you can use explicit far pointers and function declarations to get at the rest of memory. A far function can be outside the 64KB segment most functions are shoehorned into for a small-code model. (Often, libraries are declared explicitly far, so they'll work no matter what code model the program uses.)
A far pointer can refer to information outside the 64KB data segment. Typically, such pointers are used with farmalloc() and the like, to manage a heap separate from where all the rest of the data lives.
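For instance, here is a minimal sketch assuming Borland C, where farmalloc() and farfree() live in <alloc.h> (other DOS compilers had similar but differently named calls):

    #include <alloc.h>   /* Borland-specific: farmalloc(), farfree() */

    void big_buffer_demo(void)
    {
        /* Allocate 40,000 bytes outside the 64KB default data segment. */
        char far *buf = (char far *)farmalloc(40000L);
        if (buf != NULL) {
            buf[0] = 'x';     /* use it like any other pointer...         */
            farfree(buf);     /* ...but release it with the matching call */
        }
    }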
If you use a small-data, large-code model, you should explicitly make your function pointers far.
Q. What is the stack?
A "stack trace" is a list of which functions have been called, based on this information. When you start using a debugger, one of the first things you should learn is how to get a stack trace.
The stack is very inflexible about allocating memory; everything must be deallocated in exactly the reverse order it was allocated in. For implementing function calls, that is all that's needed. Allocating memory off the stack is extremely efficient. One of the reasons C compilers generate such good code is their heavy use of a simple stack.
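As a quick illustration of that reverse-order discipline: local variables are allocated on the stack when a function is entered and released when it returns, which is also why returning a pointer to one is a mistake.

    #include <string.h>

    void use_stack(void)
    {
        char buf[64];               /* allocated on the stack on entry     */
        strcpy(buf, "temporary");   /* valid for the lifetime of this call */
    }   /* buf is deallocated here, automatically */

    char *broken(void)
    {
        char buf[32];
        strcpy(buf, "oops");
        return buf;   /* wrong: this stack space is reclaimed on return */
    }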
There used to be a C function that any programmer could use for allocating memory off the stack. The memory was automatically deallocated when the calling function returned. This was a dangerous function to call; it's not available anymore.
Q. What is the heap?
The heap is where malloc(), calloc(), and realloc() get memory.
Getting memory from the heap is much slower than getting it from the stack. On the other hand, the heap is much more flexible than the stack. Memory can be allocated at any time and deallocated in any order. Such memory isn't deallocated automatically; you have to call free().
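For example, in standard C:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Ask the heap for 100 bytes; unlike the stack, this can happen at
           any point and be released in any order. */
        char *p = malloc(100);
        if (p == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        strcpy(p, "lives until free() is called");
        puts(p);
        free(p);    /* nothing deallocates it for you */
        return 0;
    }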
Recursive data structures are almost always implemented with memory from the heap. Strings often come from there too, especially strings that could be very long at runtime.
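For instance, a linked-list node (a simple recursive structure) is usually built from heap memory; the push() helper below is just a hypothetical example:

    #include <stdlib.h>
    #include <string.h>

    struct node {
        char name[32];
        struct node *next;   /* another heap-allocated node, or NULL */
    };

    struct node *push(struct node *head, const char *name)
    {
        struct node *n = malloc(sizeof *n);
        if (n == NULL)
            return head;                            /* allocation failed; leave the list alone */
        strncpy(n->name, name, sizeof n->name - 1);
        n->name[sizeof n->name - 1] = '\0';
        n->next = head;                             /* link the new node in front */
        return n;
    }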
If you can keep data in a local variable (and allocate it from the stack), your code will run faster than if you put the data on the heap. Sometimes you can use a better algorithm if you use the heap—faster, or more robust, or more flexible. It's a tradeoff.
If memory is allocated from the heap, it's available until the program ends. That's great if you remember to deallocate it when you're done. If you forget, it's a problem. A "memory leak" is some allocated memory that's no longer needed but isn't deallocated. If you have a memory leak inside a loop, you can use up all the memory on the heap and not be able to get any more. (When that happens, the allocation functions return a null pointer.) In some environments, if a program doesn't deallocate everything it allocated, memory stays unavailable even after the program ends.
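A sketch of the kind of loop that causes trouble, and the one-line fix:

    #include <stdlib.h>

    void process_all(int n)
    {
        int i;
        for (i = 0; i < n; i++) {
            char *work = malloc(1000);
            if (work == NULL)
                return;      /* heap exhausted: malloc() returns a null pointer */
            /* ... use the buffer ... */
            free(work);      /* omit this and the loop leaks 1,000 bytes per pass */
        }
    }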
Some programming languages don't make you deallocate memory from the heap. Instead, such memory is "garbage collected" automatically. This maneuver leads to some very serious performance issues. It's also a lot harder to implement. That's an issue for the people who develop compilers, not the people who buy them. (Except that software that's harder to implement often costs more.) There are some garbage collection libraries for C, but they're at the bleeding edge of the state of the art.