I am getting confused with size_t in C. I know that it is returned by the sizeof operator. But what exactly is it? Is it a data type?

Let's say I have a for loop:

for(i = 0; i < some_size; i++)

Should I use int i; or size_t i;?
int if some_size is signed, size_t if it is unsigned.
int i may not be enough to address a huge array, whereas with size_t i you can address more indices, so even a huge array should not be a problem. size_t is a data type: usually an unsigned long int, but this depends on your system.
According to the 1999 ISO C standard (C99), size_t is an unsigned integer type of at least 16 bits (see sections 7.17 and 7.18.3). size_t is an unsigned data type defined by several C/C++ standards, e.g. the C99 ISO/IEC 9899 standard, where it is defined in stddef.h. It can also be imported by inclusion of stdlib.h, as this file internally includes stddef.h. This type is used to represent the size of an object. Library functions that take or return sizes expect them to be of type size_t, and the sizeof operator evaluates to a constant value that is compatible with size_t.
As an implication, size_t is a type guaranteed to hold any array index.
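For illustration, here is a minimal sketch (the element count and names are made up for this example) of using size_t as the index type:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t count = 1000;                 /* hypothetical element count */
    double *data = malloc(count * sizeof *data);
    if (data == NULL)
        return 1;

    for (size_t i = 0; i < count; i++)   /* size_t can hold any valid index */
        data[i] = (double)i;

    printf("last element: %g\n", data[count - 1]);
    free(data);
    return 0;
}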
size_t is an unsigned type, so it cannot represent negative values (< 0). You use it when you are counting something and are sure that it cannot be negative. For example, strlen() returns a size_t because the length of a string has to be at least 0.

In your example, if your loop index is always going to be greater than or equal to 0, it might make sense to use size_t, or any other unsigned data type.
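A quick sketch of that convention (not from the original answer, just an illustration):

#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *s = "hello";
    size_t len = strlen(s);            /* strlen() returns size_t */

    for (size_t i = 0; i < len; i++)   /* index type matches the count type */
        putchar(s[i]);
    putchar('\n');
    return 0;
}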
When you use a size_t object, you have to make sure that in all the contexts it is used, including arithmetic, you want non-negative values. For example, let's say you have:
size_t s1 = strlen(str1);
size_t s2 = strlen(str2);
and you want to find the difference of the lengths of str2 and str1. You cannot do:
int diff = s2 - s1; /* bad */
This is because the calculation is done in the unsigned type: when s2 < s1, the expression s2 - s1 wraps around to a huge positive value, and converting that value to int is implementation-defined, so diff will not portably hold the negative difference you wanted. In this case, depending upon what your use case is, you might be better off using int (or long long) for s1 and s2.
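A minimal sketch (string contents chosen just for illustration) showing the wraparound and one safe alternative:

#include <stdio.h>
#include <string.h>

int main(void)
{
    size_t s1 = strlen("abcdef");   /* 6 */
    size_t s2 = strlen("abc");      /* 3 */

    /* The subtraction happens in size_t, so it wraps to a huge value. */
    printf("s2 - s1 as size_t: %zu\n", s2 - s1);

    /* Converting before subtracting preserves the sign. */
    long long diff = (long long)s2 - (long long)s1;
    printf("difference: %lld\n", diff);   /* prints -3 */
    return 0;
}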
There are some functions in C/POSIX that could/should use size_t, but don't because of historical reasons. For example, the second parameter to fgets should ideally be size_t, but is int.
1) How do I get the size of size_t? 2) Why should I prefer size_t over something like unsigned int?

1) The size of size_t is sizeof(size_t). The C standard guarantees that SIZE_MAX will be at least 65535. 2) size_t is the type returned by the sizeof operator, and is used in the standard library (for example, strlen returns size_t). As Brendan said, size_t need not be the same as unsigned int.
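If it helps, a tiny sketch that prints both quantities on the current platform:

#include <stdio.h>
#include <stdint.h>   /* SIZE_MAX */

int main(void)
{
    printf("sizeof(size_t) = %zu bytes\n", sizeof(size_t));
    printf("SIZE_MAX = %zu\n", SIZE_MAX);   /* guaranteed >= 65535 */
    return 0;
}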
size_t is guaranteed to be an unsigned type.

Note that if you compute the difference in int instead and s2 - s1 overflows an int, the behavior is undefined.
size_t is a type that can hold any array index. Depending on the implementation, it can be any of:
unsigned char
unsigned short
unsigned int
unsigned long
unsigned long long
Here's how size_t is defined in stddef.h on my machine:
typedef unsigned long size_t;
(Though beware that this is platform-specific: on 64-bit Windows, for example, unsigned long is 32-bit while size_t is 64-bit.)
Is size_t always 32 bits on a 32-bit machine, and 64 bits likewise?
Can it really be unsigned char?
uint_least16_t is what's at least 16 bits. About size_t, the standard says "unsigned integral type of the result of the sizeof operator" and "The sizeof operator yields the size (in bytes) of its operand".
unsigned char cannot be 16 bits?!
If you are the empirical type,
echo | gcc -E -xc -include 'stddef.h' - | grep size_t
Output for Ubuntu 14.04 64-bit GCC 4.8:
typedef long unsigned int size_t;
Note that stddef.h is provided by GCC and not glibc, under src/gcc/ginclude/stddef.h in GCC 4.2.
Interesting C99 appearances
malloc takes size_t as an argument, so it determines the maximum size that may be allocated. And since size_t is also the type returned by sizeof, I think it limits the maximum size of any array. See also: What is the maximum size of an array in C?
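As a sketch of that signature in use (void *malloc(size_t size); the element count here is arbitrary):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 1000;                 /* the request size has type size_t */
    int *p = malloc(n * sizeof *p);
    if (p == NULL)
        return 1;

    p[n - 1] = 42;
    printf("%d\n", p[n - 1]);
    free(p);
    return 0;
}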
To go into why size_t needed to exist and how we got here:

In pragmatic terms, size_t and ptrdiff_t are guaranteed to be 64 bits wide on a 64-bit implementation, 32 bits wide on a 32-bit implementation, and so on. They could not force any existing type to mean that, on every compiler, without breaking legacy code.
A size_t or ptrdiff_t is not necessarily the same as an intptr_t or uintptr_t. They were different on certain architectures that were still in use when size_t and ptrdiff_t were added to the Standard in the late 1980s, and that were becoming obsolete when C99 added many new types, but were not gone yet (such as 16-bit Windows). The x86 in 16-bit protected mode had a segmented memory model where the largest possible array or structure could be only 65,536 bytes in size, but a far pointer needed to be 32 bits wide, wider than the registers. On those, intptr_t would have been 32 bits wide but size_t and ptrdiff_t could be 16 bits wide and fit in a register. And who knew what kind of operating system might be written in the future? In theory, the i386 architecture offers a 32-bit segmentation model with 48-bit pointers that no operating system has ever actually used.
The type of a memory offset could not be long because far too much legacy code assumes that long is exactly 32 bits wide. This assumption was even built into the UNIX and Windows APIs. Unfortunately, a lot of other legacy code also assumed that a long is wide enough to hold a pointer, a file offset, the number of seconds that have elapsed since 1970, and so on. POSIX now provides a standardized way to force the latter assumption to be true instead of the former, but neither is a portable assumption to make.
It couldn’t be int because only a tiny handful of compilers in the ’90s made int 64 bits wide. Then they really got weird by keeping long 32 bits wide. The next revision of the Standard declared it illegal for int to be wider than long, but int is still 32 bits wide on most 64-bit systems.
It couldn’t be long long int, which anyway was added later, since that was created to be at least 64 bits wide even on 32-bit systems.
So, a new type was needed. Even if it weren’t, all those other types meant something other than an offset within an array or object. And if there was one lesson from the fiasco of 32-to-64-bit migration, it was to be specific about what properties a type needed to have, and not use one that meant different things in different programs.
"size_t and ptrdiff_t are guaranteed to be 64 bits wide on a 64-bit implementation", etc.: that guarantee is overstated. The range of size_t is primarily driven by the memory capacity of the implementation, whereas "an n-bit implementation" primarily refers to the native processor width of integers. Certainly many implementations use a similar memory size and processor bus width, but wide native integers with scant memory, or narrow processors with lots of memory, do exist and drive these two implementation properties apart.
Since nobody has yet mentioned it, the primary linguistic significance of size_t is that the sizeof operator returns a value of that type. Likewise, the primary significance of ptrdiff_t is that subtracting one pointer from another will yield a value of that type. Library functions that accept size_t do so because it allows them to work with objects whose size exceeds UINT_MAX on systems where such objects can exist, without forcing callers to waste code passing a value larger than unsigned int on systems where that type suffices for all possible objects.
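A small sketch (array size arbitrary) of both types in their defining roles:

#include <stdio.h>
#include <stddef.h>   /* size_t, ptrdiff_t */

int main(void)
{
    int arr[10];
    size_t sz = sizeof arr;              /* sizeof yields a size_t */
    ptrdiff_t off = &arr[7] - &arr[2];   /* pointer subtraction yields a ptrdiff_t */

    printf("sizeof arr = %zu, offset = %td\n", sz, off);
    return 0;
}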
Regarding malloc(): personally, I would have liked to have seen versions which take arguments of type int, long, and long long, with some implementations promoting shorter types and others implementing e.g. lmalloc(long n) { return (n < 0 || n > 32767) ? 0 : imalloc(n); }. On some platforms, calling imalloc(123) would be cheaper than calling lmalloc(123), and even on a platform where size_t is 16 bits, code which wants to allocate a size computed in a long value...
size_t and int are not interchangeable. For instance, on 64-bit Linux, size_t is 64 bits in size (i.e. sizeof(void*)) but int is 32 bits.
Also note that size_t is unsigned. If you need the signed version, there is ssize_t on some platforms, and it would be more relevant to your example.
As a general rule I would suggest using int for most general cases and only using size_t/ssize_t when there is a specific need for it (with mmap() for example).
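For context, a minimal POSIX sketch (not ISO C; read() and ssize_t come from <unistd.h>) of where the signed variant shows up:

#include <stdio.h>
#include <unistd.h>   /* read(), ssize_t (POSIX) */

int main(void)
{
    char buf[64];

    /* ssize_t read(int fd, void *buf, size_t count): the count has type
       size_t, but the result is signed so that -1 can signal an error. */
    ssize_t n = read(0, buf, sizeof buf);
    if (n < 0)
        perror("read");
    else
        printf("read %zd bytes\n", n);
    return 0;
}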
size_t is an unsigned integer data type which can hold only integer values of 0 and above; it measures the size of objects in bytes and is the type returned by the sizeof operator.
It can be qualified with const, as in:

const size_t number;

but the const is optional; the program runs without it.
size_t is regularly used for array indexing and loop counting. If the compiler is 32-bit, it typically corresponds to unsigned int; if the compiler is 64-bit, it may correspond to unsigned long long int. The maximum value of size_t therefore depends on the compiler.
size_t is already defined in the <stdio.h> header file, but it can also be defined by the <stddef.h>, <stdlib.h>, <string.h>, <time.h>, and <wchar.h> headers.
Example (with const)
#include <stdio.h>
int main()
{
const size_t value = 200;
size_t i;
int arr[value];
for (i = 0 ; i < value ; ++i)
{
arr[i] = i;
}
size_t size = sizeof(arr);
printf("size = %zu\n", size);
}
Output: size = 800
Example (without const)
#include <stdio.h>
int main()
{
size_t value = 200;
size_t i;
int arr[value];
for (i = 0; i < value; ++i)
{
arr[i] = i;
}
size_t size = sizeof(arr);
printf("size = %zu\n", size);
}
Output: size = 800
size_t is an unsigned integer data type. On systems using the GNU C Library, this will be unsigned int or unsigned long int. size_t is commonly used for array indexing and loop counting.
In general, if you are starting at 0 and going upward, always use an unsigned type to avoid an overflow taking you into a negative-value situation. This is critically important, because if your loop's maximum happens to be greater than the maximum of your counter's type, the counter will wrap around (or, for a signed type, overflow with undefined behavior and typically go negative), and you may index outside the array bounds and experience a segmentation fault (SIGSEGV). So, in general, never use int for a loop starting at 0 and going upward; use an unsigned type.
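A sketch of that failure mode (it assumes a 32-bit int and a wider size_t; the bound is hypothetical and the loop takes a couple of seconds to run):

#include <stdio.h>
#include <limits.h>

int main(void)
{
    /* With `int i`, the counter would overflow at INT_MAX (undefined
       behavior) before ever reaching this bound; with size_t the loop
       is well defined. */
    size_t limit = (size_t)INT_MAX + 2;
    size_t hits = 0;

    for (size_t i = 0; i < limit; i++)
        hits++;

    printf("iterations: %zu\n", hits);
    return 0;
}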
size_t is a typedef which is used to represent the size of any object in bytes. (Typedefs are used to create an additional name/alias for another data type, but do not create a new type.) You can find it defined in stddef.h as follows:
typedef unsigned long long size_t;
size_t is also defined in <stdio.h>.
size_t is the type of the value returned by the sizeof operator. Use size_t, in conjunction with sizeof, to define the data type of the array-size argument as follows:
#include <stdio.h>

void disp_ary(int *ary, size_t ary_size)
{
    for (size_t i = 0; i < ary_size; i++)
    {
        printf("%d ", ary[i]);
    }
}

int main(void)
{
    int arr[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 0};
    size_t ary_size = sizeof(arr) / sizeof(int);  /* sizeof yields a size_t */
    disp_ary(arr, ary_size);
    return 0;
}
size_t is guaranteed to be big enough to contain the size of the biggest object the host system can handle.

Note that an array's size limitation is really a function of the system's stack size limits where the code is compiled and executed. You should be able to adjust the stack size at link time (see the ld command's --stack-size parameter).
To give you an idea of approximate stack sizes:
4K on an embedded device
1M on Win10
7.4M on Linux
Many C library functions like malloc, memcpy and strlen declare their arguments and return types as size_t.
size_t gives the programmer the ability to deal with different types by adding/subtracting the number of elements required instead of using the offset in bytes.
Let's get a deeper appreciation for what size_t can do for us by examining its usage in pointer arithmetic operations on a C string and an integer array.

Here's an example using a C string:
#include <stdio.h>
#include <string.h>

const char* reverse(const char *orig)
{
    size_t len = strlen(orig);        /* strlen() returns size_t */
    const char *rev = orig + len - 1;
    while (rev >= orig)
    {
        printf("%c", *rev);
        rev = rev - 1; // <= See below
    }
    return rev;
}

int main(void)
{
    const char *string = "123";
    reverse(string);
}
// Output: 321
0x7ff626939004 "123" // <= orig
0x7ff626939006 "3" // <= rev - 1 of 3
0x7ff626939005 "23" // <= rev - 2 of 3
0x7ff626939004 "123" // <= rev - 3 of 3
0x7ff6aade9003 "" // <= rev is indeterminate. This can be exploited as an out-of-bounds bug to read memory contents that this program has no business reading.
That's not very helpful in understanding the benefits of using size_t, since a character is one byte regardless of your architecture. When we're dealing with numerical types, size_t becomes very beneficial: it is an unsigned integer big enough to hold the size of any object, and its width changes according to the platform on which the code is executed.
Here's how we can leverage sizeof and size_t when passing an array of ints:
#include <stdio.h>

void print_reverse(int *orig, size_t ary_size)
{
int *rev = orig + ary_size - 1;
while (rev >= orig)
{
printf("%i", *rev);
rev = rev - 1;
}
}
int main()
{
int nums[] = {1, 2, 3};
print_reverse(nums, sizeof(nums)/sizeof(*nums));
return 0;
}
0x617d3ffb44 1 // <= orig
0x617d3ffb4c 3 // <= rev - 1 of 3
0x617d3ffb48 2 // <= rev - 2 of 3
0x617d3ffb44 1 // <= rev - 3 of 3
Above, we see that an int takes 4 bytes (and since there are 8 bits per byte, an int occupies 32 bits).

If we were to create an array of longs, we'd discover that a long takes 64 bits on a 64-bit Linux system but only 32 bits on a Win64 system. Hence, using size_t will save a lot of coding and potential bugs, especially when running C code that performs address arithmetic on different architectures.

So the moral of this story is: use size_t and let your C compiler do the error-prone work of pointer arithmetic.
size_t or any other unsigned type might be seen used as a loop variable, as loop variables are typically greater than or equal to 0.

When we use a size_t object, we have to make sure that in all the contexts in which it is used, including arithmetic, we want only non-negative values. For instance, the following program would definitely give an unexpected result:
// C program to demonstrate that size_t or
// any unsigned int type should be used
// carefully when used in a loop
#include <stdio.h>
int main()
{
const size_t N = 10;
int a[N];
// This is fine
for (size_t n = 0; n < N; ++n)
a[n] = n;
// But reverse cycles are tricky for unsigned
// types as can lead to infinite loop
for (size_t n = N-1; n >= 0; --n)
printf("%d ", a[n]);
}
Output
Infinite loop and then segmentation fault
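A common fix (a sketch of the standard test-then-decrement idiom) keeps the unsigned counter from ever needing to go below zero:

#include <stdio.h>

int main(void)
{
    const size_t N = 10;
    int a[N];

    for (size_t n = 0; n < N; ++n)
        a[n] = (int)n;

    /* n is decremented after the test, so the loop body sees N-1 ... 0
       and the condition fails once n has reached 0. */
    for (size_t n = N; n-- > 0; )
        printf("%d ", a[n]);
    printf("\n");
}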
This is a platform-specific typedef. For example, on a particular machine, it might be unsigned int or unsigned long. You should use this definition for better portability of your code.
From my understanding, size_t is an unsigned integer whose bit size is large enough to hold a pointer of the native architecture. So:
sizeof(size_t) >= sizeof(void*)
Not necessarily: a pointer can be wider than size_t. Several examples: C compilers in x86 real mode can have 32-bit FAR or HUGE pointers, but size_t is still 16 bits. Another example: Watcom C used to have a special fat pointer for extended memory that was 48 bits wide, but size_t was not. On embedded controllers with a Harvard architecture, there is no correlation either, because both concern different address spaces.
size_t is for objects in memory. The C standard doesn't even define stat() or off_t (those are POSIX definitions) or anything to do with disks or file systems; it stops itself at FILE streams. Virtual memory management is completely different from file systems and file management as far as size requirements go, so mentioning off_t is irrelevant here.

The standard defines size_t as the type of the result of the sizeof operator (7.17p2, about <stddef.h>). Section 6.5 explains exactly how C expressions work (6.5.3.4 for sizeof). Since you cannot apply sizeof to a disk file (mostly because C doesn't even define how disks and files work), there is no room for confusion. In other words, blame Wikipedia (and this answer for quoting Wikipedia and not the actual C standard).