Both types of allocation can be used on any sort of system, though the complexity of the dynamic allocator may vary.
Dynamic allocation is done using runtime calls, so the program can react to what's needed at the time. Static allocation is decided ahead of time instead.
Both types of allocation have the same memory constraints as the system itself. So in theory at least, they could have access to the same amount of memory.
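As a rough sketch of the difference, the following C program reserves one buffer statically, so its size is fixed when the program is built, and one dynamically with `malloc`, so its size can depend on input known only at run time. The buffer names and sizes here are just for illustration.

```c
#include <stdio.h>
#include <stdlib.h>

/* Static allocation: the size is fixed when the program is built. */
static int static_buffer[64];

int main(void)
{
    /* Dynamic allocation: the size can depend on runtime input. */
    size_t count = 0;
    if (scanf("%zu", &count) != 1 || count == 0)
        return 1;

    int *dynamic_buffer = malloc(count * sizeof *dynamic_buffer);
    if (dynamic_buffer == NULL)
        return 1;

    printf("static buffer holds %zu ints, dynamic buffer holds %zu ints\n",
           sizeof static_buffer / sizeof static_buffer[0], count);

    free(dynamic_buffer);
    return 0;
}
```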
Do C's memory management functions like `malloc` and `free` validate the addresses passed to them?
An allocator may choose to be strict about the parameters it accepts, but the C specification does not require it to be; passing an invalid or already-freed pointer to `free` is simply undefined behavior. Generally this strictness can be controlled with debugging or hardening options.
When writing your own allocators, you get to decide how to handle invalid data.
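As one possible approach, here is a minimal sketch of a hand-rolled allocator wrapper that chooses to validate pointers on free by stashing a magic value in a header in front of every allocation. The function names, header layout, and magic value are assumptions for illustration, not how any particular allocator works.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical header stored in front of every allocation. */
#define MY_MAGIC 0xA110CA7EU

struct my_header {
    uint32_t magic;
    size_t   size;
};

void *my_alloc(size_t size)
{
    struct my_header *h = malloc(sizeof *h + size);
    if (h == NULL)
        return NULL;
    h->magic = MY_MAGIC;
    h->size  = size;
    return h + 1;            /* hand out the bytes after the header */
}

void my_free(void *ptr)
{
    if (ptr == NULL)
        return;              /* like free(), accept NULL quietly */

    struct my_header *h = (struct my_header *)ptr - 1;

    /* The strictness is our choice: here an invalid pointer aborts in
       debug builds rather than silently corrupting the heap. */
    assert(h->magic == MY_MAGIC);

    h->magic = 0;            /* helps catch a double free on the next call */
    free(h);
}
```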
If the allocator were used mainly for very small allocations (less than 8 bytes), what concerns would you have?
Everything mentioned is a concern here, which is why some allocators prefer to use "pools" or "buckets" for very small allocations.
The allocator can make assumptions about these special areas that reduce both the time taken to find a free range and the overhead of recording information about the ranges.
In this case, the performance of the heap would be fine to begin with, but as the program continues, more and more small ranges pile up, leading to poorer performance later.
Unpredictable heap performance is a problem for real-time applications, such as video games.
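To make the pool idea concrete, here is a minimal sketch of a fixed-size free-list pool for small allocations, where every slot is the same size, so allocating and freeing are both constant-time pointer pushes and pops. The slot size, slot count, and function names are assumptions for illustration, not any particular allocator's implementation.

```c
#include <stddef.h>

#define SLOT_SIZE  8      /* hypothetical: every allocation gets 8 bytes */
#define SLOT_COUNT 1024   /* hypothetical pool capacity */

/* While a slot is free it holds the free-list link; while it is
   allocated the caller uses its bytes. */
union pool_slot {
    union pool_slot *next;
    unsigned char    bytes[SLOT_SIZE];
};

static union pool_slot  pool_storage[SLOT_COUNT];
static union pool_slot *free_list;

void pool_init(void)
{
    /* Thread every slot onto the free list. */
    free_list = NULL;
    for (size_t i = 0; i < SLOT_COUNT; i++) {
        pool_storage[i].next = free_list;
        free_list = &pool_storage[i];
    }
}

void *pool_alloc(void)
{
    union pool_slot *slot = free_list;
    if (slot != NULL)
        free_list = slot->next;   /* pop the head: constant time */
    return slot;                  /* NULL when the pool is exhausted */
}

void pool_free(void *ptr)
{
    union pool_slot *slot = ptr;
    slot->next = free_list;       /* push onto the head: constant time */
    free_list = slot;
}
```

Because every operation touches only the head of the list, the time taken does not depend on how long the program has been running, which is exactly the predictability a real-time application wants.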