
Is there a printf converter to print in binary format?

I can print with printf as a hex or octal number. Is there a format tag to print as binary, or arbitrary base?

I am running gcc.

printf("%d %x %o\n", 10, 10, 10); //prints "10 A 12\n"
print("%b\n", 10); // prints "%b\n"
You cannot do this with printf, as far as I know. You could, obviously, write a helper function to accomplish it, but that doesn't sound like the direction you want to go.
There isn't a predefined format for that. You need to convert the number to a string yourself and then print the string.
A quick Google search produced this page with some information that may be useful: forums.macrumors.com/archive/index.php/t-165959.html
Not as part of the ANSI Standard C Library -- if you're writing portable code, the safest method is to roll your own.
A one-statement, standard, generic (any integral type of any length) conversion to a binary string in C++: stackoverflow.com/a/31660310/1814353

William Whyte

Hacky but works for me:

#define BYTE_TO_BINARY_PATTERN "%c%c%c%c%c%c%c%c"
#define BYTE_TO_BINARY(byte)  \
  (byte & 0x80 ? '1' : '0'), \
  (byte & 0x40 ? '1' : '0'), \
  (byte & 0x20 ? '1' : '0'), \
  (byte & 0x10 ? '1' : '0'), \
  (byte & 0x08 ? '1' : '0'), \
  (byte & 0x04 ? '1' : '0'), \
  (byte & 0x02 ? '1' : '0'), \
  (byte & 0x01 ? '1' : '0') 
printf("Leading text "BYTE_TO_BINARY_PATTERN, BYTE_TO_BINARY(byte));

For multi-byte types

printf("m: "BYTE_TO_BINARY_PATTERN" "BYTE_TO_BINARY_PATTERN"\n",
  BYTE_TO_BINARY(m>>8), BYTE_TO_BINARY(m));

You need all the extra quotes, unfortunately. This approach has the usual efficiency risks of macros (don't pass a function call as the argument to BYTE_TO_BINARY, since it will be evaluated eight times), but it avoids the memory issues and the multiple invocations of strcat in some of the other proposals here.


It also has the advantage of being invocable multiple times in a single printf, which the solutions with static buffers can't do.
I've taken the liberty of changing the %d to %c, because it should be even faster (%d has to perform a digit-to-char conversion, while %c simply outputs the argument).
Posted an expanded version of this macro with 16, 32, 64 bit int support: stackoverflow.com/a/25108449/432509
Note that this approach is not stack friendly. Assuming int is 32 bits on the system, printing a single 32-bit value requires passing 32 int-sized arguments, 128 bytes in total. Depending on stack size, that may or may not be an issue.
It's important to add parentheses around byte in the macro, or you can run into problems when passing an expression: BYTE_TO_BINARY(a | b) expands to a | b & 0x01, which is not the same as (a | b) & 0x01.
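A parenthesized variant of the macro, as that comment suggests (just a sketch; the multiple-evaluation caveat above still applies):

#define BYTE_TO_BINARY(byte)       \
  (((byte) & 0x80) ? '1' : '0'),   \
  (((byte) & 0x40) ? '1' : '0'),   \
  (((byte) & 0x20) ? '1' : '0'),   \
  (((byte) & 0x10) ? '1' : '0'),   \
  (((byte) & 0x08) ? '1' : '0'),   \
  (((byte) & 0x04) ? '1' : '0'),   \
  (((byte) & 0x02) ? '1' : '0'),   \
  (((byte) & 0x01) ? '1' : '0')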
ib.

Print Binary for Any Datatype

// Assumes little endian
void printBits(size_t const size, void const * const ptr)
{
    unsigned char *b = (unsigned char*) ptr;
    unsigned char byte;
    int i, j;
    
    for (i = size-1; i >= 0; i--) {
        for (j = 7; j >= 0; j--) {
            byte = (b[i] >> j) & 1;
            printf("%u", byte);
        }
    }
    puts("");
}

Test:

int main(void)
{
    int i = 23;
    unsigned int ui = UINT_MAX;  /* needs <limits.h> */
    float f = 23.45f;
    printBits(sizeof(i), &i);
    printBits(sizeof(ui), &ui);
    printBits(sizeof(f), &f);
    return 0;
}

Suggest size_t i; for (i = size; i-- > 0; ) to avoid the size_t vs. int mismatch (see the sketch below).
Could someone please elaborate on the logic behind this code?
Take each byte in ptr, highest index first (outer loop); then for each bit of the current byte (inner loop), shift the byte right by the bit position and mask with 1, giving a value of 0 or 1. Print that value with printf and the %u format. HTH.
@ZX9 Notice that the suggested code used > with size_t and not the >= of your comment to determine when to terminate the loop.
@ZX9 Still a useful original comment of yours as coders do need to be careful considering the edge case use of > and >= with unsigned types. 0 is an unsigned edge case and commonly occurs, unlike signed math with less common INT_MAX/INT_MIN.
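A sketch of the same printBits with the loop counter changed to size_t, as suggested above (still assumes little endian; putchar in place of the per-bit printf is an incidental simplification):

#include <stdio.h>

void printBits(size_t const size, void const * const ptr)
{
    unsigned char const *b = (unsigned char const *) ptr;

    for (size_t i = size; i-- > 0; ) {   /* counts size-1 down to 0 without underflow */
        for (int j = 7; j >= 0; j--) {
            putchar('0' + ((b[i] >> j) & 1));
        }
    }
    putchar('\n');
}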
EvilTeach

Here is a quick hack to demonstrate techniques to do what you want.

#include <stdio.h>      /* printf */
#include <string.h>     /* strcat */
#include <stdlib.h>     /* strtol */

const char *byte_to_binary
(
    int x
)
{
    static char b[9];
    b[0] = '\0';

    int z;
    for (z = 128; z > 0; z >>= 1)
    {
        strcat(b, ((x & z) == z) ? "1" : "0");
    }

    return b;
}

int main
(
    void
)
{
    {
        /* binary string to int */

        char *tmp;
        char *b = "0101";

        printf("%d\n", strtol(b, &tmp, 2));
    }

    {
        /* byte to binary string */

        printf("%s\n", byte_to_binary(5));
    }
    
    return 0;
}

This is certainly less "weird" than custom writing an escape overload for printf. It's simple to understand for a developer new to the code, as well.
A few changes: strcat is an inefficient method of adding a single char to the string on each pass of the loop. Instead, add a char *p = b; and replace the inner loop with *p++ = (x & z) ? '1' : '0'. z should start at 128 (2^7) instead of 256 (2^8). Consider updating to take a pointer to the buffer to use (for thread safety), similar to inet_ntoa().
@EvilTeach: You're using a ternary operator yourself as a parameter to strcat()! I agree that strcat is probably easier to understand than post-incrementing a dereferenced pointer for the assignment, but even beginners need to know how to properly use the standard library. Maybe using an indexed array for assignment would have been a good demonstration (and will actually work, since b isn't reset to all-zeros each time you call the function).
Random: The binary buffer char is static, and is cleared to all zeros in the assignment. This will only clear it the first time it's run, and after that it wont clear, but instead use the last value.
Also, this should document that the previous result will be invalid after calling the function again, so callers should not try to use it like this: printf("%s + %s = %s", byte_to_binary(3), byte_to_binary(4), byte_to_binary(3+4)).
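Putting the comments above together, a reentrant sketch that writes characters directly and takes a caller-provided buffer (the _r name and the 9-byte buffer requirement are just illustrative):

const char *byte_to_binary_r(int x, char *b)   /* b must hold at least 9 bytes */
{
    char *p = b;
    int z;

    for (z = 128; z > 0; z >>= 1)
        *p++ = (x & z) ? '1' : '0';
    *p = '\0';

    return b;
}

/* Usage: char buf[9]; printf("%s\n", byte_to_binary_r(5, buf)); */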
DGentry

There isn't a binary conversion specifier in glibc normally.

It is possible to add custom conversion types to the printf() family of functions in glibc. See register_printf_function for details. You could add a custom %b conversion for your own use, if it simplifies the application code to have it available.

Here is an example of how to implement custom printf formats in glibc.


Note that register_printf_function is now deprecated (warning: 'register_printf_function' is deprecated [-Wdeprecated-declarations]). There is a new function to do the same, though: register_printf_specifier(). An example of the new usage can be found here: codereview.stackexchange.com/q/219994/200418
Peter Mortensen

You could use a small table to improve speed [1]. Similar techniques are useful in the embedded world, for example, to invert a byte:

const char *bit_rep[16] = {
    [ 0] = "0000", [ 1] = "0001", [ 2] = "0010", [ 3] = "0011",
    [ 4] = "0100", [ 5] = "0101", [ 6] = "0110", [ 7] = "0111",
    [ 8] = "1000", [ 9] = "1001", [10] = "1010", [11] = "1011",
    [12] = "1100", [13] = "1101", [14] = "1110", [15] = "1111",
};

void print_byte(uint8_t byte)
{
    printf("%s%s", bit_rep[byte >> 4], bit_rep[byte & 0x0F]);
}

[1] I'm mostly referring to embedded applications where optimizers are not so aggressive and the speed difference is visible.


it works! but what is that syntax used to define bit_rep ?
This code looks great. But how would you update this code to handle uint16_t, uint32_t and uint64_t?
@Robk, 4, 8, and 16 %s's, with the same number of bit_rep[(word >> shift) & 0xF] arguments, should do it (see the sketch below). Although I would argue 16 string prints for a 64-bit number are probably not going to be any faster than looping 64 times and outputting 0/1.
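A sketch of that extension, reusing the bit_rep table above and printing one nibble at a time, most significant first (the function names are illustrative):

#include <stdint.h>
#include <stdio.h>

void print_uint16(uint16_t w)
{
    for (int shift = 12; shift >= 0; shift -= 4)
        printf("%s", bit_rep[(w >> shift) & 0x0F]);
}

void print_uint32(uint32_t w)
{
    print_uint16((uint16_t)(w >> 16));   /* high half first */
    print_uint16((uint16_t)w);
}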
isrnick

Print the least significant bit and shift it out on the right. Doing this until the integer becomes zero prints the binary representation without leading zeros but in reversed order. Using recursion, the order can be corrected quite easily.

#include <stdio.h>

void print_binary(unsigned int number)
{
    if (number >> 1) {
        print_binary(number >> 1);
    }
    putc((number & 1) ? '1' : '0', stdout);
}

To me, this is one of the cleanest solutions to the problem. If you want a 0b prefix and a trailing newline character, I suggest wrapping the function (a sketch follows).

Online demo
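A possible wrapper of the kind suggested above (a sketch; the name is illustrative):

void print_binary_line(unsigned int number)
{
    fputs("0b", stdout);      /* prefix */
    print_binary(number);     /* the recursive function above */
    putc('\n', stdout);       /* trailing newline */
}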


You also should use unsigned int number, because when the given number is negative the function enters a never-ending recursion.
More efficient approach, since in ASCII, '0'+1='1': putc('0'+(number&1), stdout);
I've changed the function to also work with int values equal or less than 0.
Pass the value 0x80 to your function; the result is not as intended.
ideasman42

Based on @William Whyte's answer, this is a macro that provides int8,16,32 & 64 versions, reusing the INT8 macro to avoid repetition.

/* --- PRINTF_BYTE_TO_BINARY macros --- */
#define PRINTF_BINARY_PATTERN_INT8 "%c%c%c%c%c%c%c%c"
#define PRINTF_BYTE_TO_BINARY_INT8(i)    \
    (((i) & 0x80ll) ? '1' : '0'), \
    (((i) & 0x40ll) ? '1' : '0'), \
    (((i) & 0x20ll) ? '1' : '0'), \
    (((i) & 0x10ll) ? '1' : '0'), \
    (((i) & 0x08ll) ? '1' : '0'), \
    (((i) & 0x04ll) ? '1' : '0'), \
    (((i) & 0x02ll) ? '1' : '0'), \
    (((i) & 0x01ll) ? '1' : '0')

#define PRINTF_BINARY_PATTERN_INT16 \
    PRINTF_BINARY_PATTERN_INT8              PRINTF_BINARY_PATTERN_INT8
#define PRINTF_BYTE_TO_BINARY_INT16(i) \
    PRINTF_BYTE_TO_BINARY_INT8((i) >> 8),   PRINTF_BYTE_TO_BINARY_INT8(i)
#define PRINTF_BINARY_PATTERN_INT32 \
    PRINTF_BINARY_PATTERN_INT16             PRINTF_BINARY_PATTERN_INT16
#define PRINTF_BYTE_TO_BINARY_INT32(i) \
    PRINTF_BYTE_TO_BINARY_INT16((i) >> 16), PRINTF_BYTE_TO_BINARY_INT16(i)
#define PRINTF_BINARY_PATTERN_INT64    \
    PRINTF_BINARY_PATTERN_INT32             PRINTF_BINARY_PATTERN_INT32
#define PRINTF_BYTE_TO_BINARY_INT64(i) \
    PRINTF_BYTE_TO_BINARY_INT32((i) >> 32), PRINTF_BYTE_TO_BINARY_INT32(i)
/* --- end macros --- */

#include <stdio.h>
int main() {
    long long int flag = 1648646756487983144ll;
    printf("My Flag "
           PRINTF_BINARY_PATTERN_INT64 "\n",
           PRINTF_BYTE_TO_BINARY_INT64(flag));
    return 0;
}

This outputs:

My Flag 0001011011100001001010110111110101111000100100001111000000101000

For readability you may want to add a separator, e.g.:

My Flag 00010110,11100001,00101011,01111101,01111000,10010000,11110000,00101000

This is excellent. Is there a particular reason for printing the bits starting with Least Significant Bits?
how would you recommend adding the comma?
It would be good to add a grouped version of the PRINTF_BYTE_TO_BINARY_INT# defines to use optionally (see the sketch below).
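One possible grouped variant, reusing the INT8 pattern and inserting a separator between the byte groups; only the format string changes, and the PRINTF_BYTE_TO_BINARY_INT64 argument macro is reused as-is (a sketch; the _SEP names are not part of the answer above):

#define PRINTF_BINARY_PATTERN_INT16_SEP \
    PRINTF_BINARY_PATTERN_INT8 "," PRINTF_BINARY_PATTERN_INT8
#define PRINTF_BINARY_PATTERN_INT32_SEP \
    PRINTF_BINARY_PATTERN_INT16_SEP "," PRINTF_BINARY_PATTERN_INT16_SEP
#define PRINTF_BINARY_PATTERN_INT64_SEP \
    PRINTF_BINARY_PATTERN_INT32_SEP "," PRINTF_BINARY_PATTERN_INT32_SEP

/* printf("My Flag " PRINTF_BINARY_PATTERN_INT64_SEP "\n",
          PRINTF_BYTE_TO_BINARY_INT64(flag)); */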
ib.

Here's a version of the function that does not suffer from reentrancy issues or limits on the size/type of the argument:

#include <limits.h>   /* CHAR_BIT */
#include <stdint.h>   /* uintmax_t */

#define FMT_BUF_SIZE (CHAR_BIT*sizeof(uintmax_t)+1)

char *binary_fmt(uintmax_t x, char buf[static FMT_BUF_SIZE])
{
    char *s = buf + FMT_BUF_SIZE;
    *--s = 0;
    if (!x) *--s = '0';
    for (; x; x /= 2) *--s = '0' + x%2;
    return s;
}

Note that this code would work just as well for any base between 2 and 10 if you just replace the 2's by the desired base. Usage is:

char tmp[FMT_BUF_SIZE];
printf("%s\n", binary_fmt(x, tmp));

Where x is any integral expression.


Yes, you can do that. But it's really bad design. Even if you don't have threads or reentrancy, the caller has to be aware that the static buffer is being reused, and that things like char *a = binary_fmt(x), *b = binary_fmt(y); will not work as expected. Forcing the caller to pass a buffer makes the storage requirement explict; the caller is of course free to use a static buffer if that's really desired, and then the reuse of the same buffer becomes explicit. Also note that, on modern PIC ABIs, static buffers usually cost more code to access than buffers on the stack.
That's still a bad design. It requires an extra copying step in those cases, and it's no less expensive than having the caller provide the buffer even in cases where copying wouldn't be required. Using static storage is just a bad idiom.
Having to pollute the namespace of either the preprocessor or variable symbol table with an unnecessary extra name that must be used to properly size the storage that must be allocated by every caller, and forcing every caller to know this value and to allocate the necessary amount of storage, is bad design when the simpler function-local storage solution will suffice for most intents and purposes, and when a simple strdup() call covers 99% of the rest of uses.
Here we're going to have to disagree. I can't see how adding one unobtrusive preprocessor symbol comes anywhere near the harmfulness of limiting the usage cases severely, making the interface error-prone, reserving permanent storage for the duration of the program for a temporary value, and generating worse code on most modern platforms.
I don't advocate micro-optimizing without reason (i.e. measurements). But I do think performance, even if it's on the micro-gain scale, is worth mentioning when it comes as a bonus along with a fundamentally superior design.
Robotbugs

Quick and easy solution:

void printbits(my_integer_type x)
{
    for(int i=sizeof(x)<<3; i; i--)
        putchar('0'+((x>>(i-1))&1));
}

Works for any size type and for signed and unsigned ints. The '&1' is needed to handle signed ints as the shift may do sign extension.

There are so many ways of doing this. Here's a super simple one for printing 32 bits or n bits from a signed or unsigned 32 bit type (not putting a negative if signed, just printing the actual bits) and no carriage return. Note that i is decremented before the bit shift:

#define printbits_n(x,n) for (int i=n;i;i--,putchar('0'|(x>>i)&1))
#define printbits_32(x) printbits_n(x,32)

What about returning a string with the bits to store or print later? Either you allocate the memory and return it, and the caller has to free it, or you return a static string, which will get clobbered if the function is called again or from another thread. Both methods are shown:

char *int_to_bitstring_alloc(int x, int count)
{
    count = count<1 ? sizeof(x)*8 : count;
    char *pstr = malloc(count+1);
    for(int i = 0; i<count; i++)
        pstr[i] = '0' | ((x>>(count-1-i))&1);
    pstr[count]=0;
    return pstr;
}

#define BITSIZEOF(x)    (sizeof(x)*8)

char *int_to_bitstring_static(int x, int count)
{
    static char bitbuf[BITSIZEOF(x)+1];
    count = (count<1 || count>BITSIZEOF(x)) ? BITSIZEOF(x) : count;
    for(int i = 0; i<count; i++)
        bitbuf[i] = '0' | ((x>>(count-1-i))&1);
    bitbuf[count]=0;
    return bitbuf;
}

Call with:

// memory allocated string returned which needs to be freed
char *pstr = int_to_bitstring_alloc(0x97e50ae6, 17);
printf("bits = 0b%s\n", pstr);
free(pstr);

// no free needed but you need to copy the string to save it somewhere else
char *pstr2 = int_to_bitstring_static(0x97e50ae6, 17);
printf("bits = 0b%s\n", pstr2);

I'm testing this and it looks like both int_to_bitstring_ methods do not calculate the results properly, or am I missing something? printbits works fine. Also, for values larger than 32 the results of the static and alloc methods begin to differ. Not much experience in C and working with bits yet.
ib.
const char* byte_to_binary(int x)
{
    static char b[sizeof(int)*8+1] = {0};
    int y;
    long long z;

    for (z = 1LL<<sizeof(int)*8-1, y = 0; z > 0; z >>= 1, y++) {
        b[y] = (((x & z) == z) ? '1' : '0');
    }
    b[y] = 0;

    return b;
}

Nice solution. I would change some stuff though. I.e. going backward in the string so that input of any size could be handled properly.
All those 8s should be replaced by CHAR_BIT.
I like that it does not use string libraries of any kind and thus can be used in an embedded setting easily.
Using static variables is really bad for this function. Imagine printf(byte_to_binary(1), byte_to_binary(5)), where one call would overwrite the string from the other call.
chux - Reinstate Monica

Is there a printf converter to print in binary format?

The printf() family is only able to print integers in base 8, 10, and 16 using the standard specifiers directly. I suggest creating a function that converts the number to a string per the code's particular needs.

To print in any base [2-36]

All other answers so far have at least one of these limitations.

Use static memory for the return buffer. This limits the number of times the function may be used as an argument to printf().

Allocate memory, requiring the calling code to free the pointers.

Require the calling code to explicitly provide a suitable buffer.

Call printf() directly. This obliges a new function for fprintf(), sprintf(), vsprintf(), etc.

Use a reduced integer range.

The following has none of the above limitations. It does require C99 or later and use of "%s". It uses a compound literal to provide the buffer space. It has no trouble with multiple calls in a printf().

#include <assert.h>
#include <limits.h>
#define TO_BASE_N (sizeof(unsigned)*CHAR_BIT + 1)

//                               v--compound literal--v
#define TO_BASE(x, b) my_to_base((char [TO_BASE_N]){""}, (x), (b))

// Tailor the details of the conversion function as needed
// This one does not display unneeded leading zeros
// Use return value, not `buf`
char *my_to_base(char buf[TO_BASE_N], unsigned i, int base) {
  assert(base >= 2 && base <= 36);
  char *s = &buf[TO_BASE_N - 1];
  *s = '\0';
  do {
    s--;
    *s = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"[i % base];
    i /= base;
  } while (i);

  // Could employ memmove here to move the used buffer to the beginning
  // size_t len = &buf[TO_BASE_N] - s;
  // memmove(buf, s, len);

  return s;
}

#include <stdio.h>
int main(void) {
  int ip1 = 0x01020304;
  int ip2 = 0x05060708;
  printf("%s %s\n", TO_BASE(ip1, 16), TO_BASE(ip2, 16));
  printf("%s %s\n", TO_BASE(ip1, 2), TO_BASE(ip2, 2));
  puts(TO_BASE(ip1, 8));
  puts(TO_BASE(ip1, 36));
  return 0;
}

Output

1020304 5060708
1000000100000001100000100 101000001100000011100001000
100401404
A2F44

This is very useful. Do you know how to use it in C++? When I compile, it generates an error "Severity Code Description Project File Line Suppression State Error C4576 a parenthesized type followed by an initializer list is a non-standard explicit type conversion syntax hello C:\my_projects\hello\hello\main.cpp 39 "
@Justalearner This generates a C++ error because it uses a C feature, compound literals, which is not part of C++. Perhaps post your C++ implementation that tries to do the same - even if incomplete, I am sure you will get help - as long as you show your attempt first.
Kalcifer

As of February 3rd, 2022, the GNU C Library has been updated to version 2.35. As a result, %b is now supported for output in binary format.

printf-family functions now support the %b format for output of integers in binary, as specified in draft ISO C2X, and the %B variant of that format recommended by draft ISO C2X.
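A minimal example, assuming glibc 2.35 or newer (or any C23-conforming library); untested sketch:

#include <stdio.h>

int main(void)
{
    printf("%b\n", 10);    /* prints 1010 */
    printf("%#b\n", 10);   /* prints 0b1010 (the '#' flag adds the prefix) */
    return 0;
}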


ib.

None of the previously posted answers were exactly what I was looking for, so I wrote one. It makes it super simple to use %B with printf!

/*
 * File:   main.c
 * Author: Techplex.Engineer
 *
 * Created on February 14, 2012, 9:16 PM
 */

#include <stdio.h>
#include <stdlib.h>
#include <printf.h>
#include <math.h>
#include <string.h>

static int printf_arginfo_M(const struct printf_info *info, size_t n, int *argtypes, int *size)
{
    /* "%B" always takes one argument, an int. */
    if (n > 0) {
        argtypes[0] = PA_INT;
    }
    return 1;
}

static int printf_output_M(FILE *stream, const struct printf_info *info, const void *const *args)
{
    int value = 0;
    int len;

    value = *((const int *) args[0]);

    // Beginning of my code ------------------------------------------------------------
    char buffer [50] = "";  // Is this bad?
    char buffer2 [50] = "";  // Is this bad?
    int bits = info->width;
    if (bits <= 0)
        bits = 8;  // Default to 8 bits

    int mask = 1 << (bits - 1);  /* integer mask; avoids floating-point pow() */
    while (mask > 0) {
        sprintf(buffer, "%s", ((value & mask) > 0 ? "1" : "0"));
        strcat(buffer2, buffer);
        mask >>= 1;
    }
    strcat(buffer2, "\n");
    // End of my code --------------------------------------------------------------
    len = fprintf(stream, "%s", buffer2);
    return len;
}

int main(int argc, char** argv)
{
    register_printf_specifier('B', printf_output_M, printf_arginfo_M);

    printf("%4B\n", 65);

    return EXIT_SUCCESS;
}

will this overflow with more than 50 bits?
Good call, yeah it will... I was told I needed to use malloc, ever done that?
yes of course. super easy: char* buffer = (char*) malloc(sizeof(char) * 50);
@JanusTroelsen, or much cleaner, smaller, more maintainable: char *buffer = malloc(sizeof(*buffer) * 50);
Why would "%B" be any different than "%b" in this respect? Previous answers said things like "There is no formatting function in the C standard library to output binary like that." and "Some runtimes support "%b" although that is not a standard.".
ib.

This code should handle your needs up to 64 bits. I created two functions: pBin and pBinFill. Both do the same thing, but pBinFill fills in the leading spaces with the fill character provided by its last argument. The test function generates some test data, then prints it out using the pBinFill function.

#define kDisplayWidth 64

char* pBin(long int x,char *so)
{
  char s[kDisplayWidth+1];
  int i = kDisplayWidth;
  s[i--] = 0x00;  // terminate string
  do {  // fill in array from right to left
    s[i--] = (x & 1) ? '1' : '0';  // determine bit
    x >>= 1;  // shift right 1 bit
  } while (x > 0);
  i++;  // point to last valid character
  sprintf(so, "%s", s+i);  // stick it in the output string
  return so;
}

char* pBinFill(long int x, char *so, char fillChar)
{
  // fill in array from right to left
  char s[kDisplayWidth+1];
  int i = kDisplayWidth;
  s[i--] = 0x00;  // terminate string
  do {  // fill in array from right to left
    s[i--] = (x & 1) ? '1' : '0';
    x >>= 1;  // shift right 1 bit
  } while (x > 0);
  while (i >= 0) s[i--] = fillChar;  // fill with fillChar 
  sprintf(so, "%s", s);
  return so;
}

void test()
{
  char so[kDisplayWidth+1];  // working buffer for pBin
  long int val = 1;
  do {
    printf("%ld =\t\t%#lx =\t\t0b%s\n", val, val, pBinFill(val, so, '0'));
    val *= 11;  // generate test data
  } while (val < 100000000);
}

Output:

00000001 =  0x000001 =  0b00000000000000000000000000000001
00000011 =  0x00000b =  0b00000000000000000000000000001011
00000121 =  0x000079 =  0b00000000000000000000000001111001
00001331 =  0x000533 =  0b00000000000000000000010100110011
00014641 =  0x003931 =  0b00000000000000000011100100110001
00161051 =  0x02751b =  0b00000000000000100111010100011011
01771561 =  0x1b0829 =  0b00000000000110110000100000101001
19487171 = 0x12959c3 =  0b00000001001010010101100111000011

John Millikin

Some runtimes support "%b" although that is not a standard.

Also see here for an interesting discussion:

http://bytes.com/forum/thread591027.html

HTH


This is actually a property of the C runtime library, not the compiler.
quinmars

Maybe a bit OT, but if you need this only for debugging, to understand or retrace some binary operations you are doing, you might take a look at wcalc (a simple console calculator). With the -b option you get binary output.

e.g.

$ wcalc -b "(256 | 3) & 0xff"
 = 0b11

there are a few other options on this front, too... ruby -e 'printf("%b\n", 0xabc)', dc followed by 2o followed by 0x123p, and so forth.
Peter Mortensen

There is no formatting function in the C standard library to output binary like that. All the format operations the printf family supports are geared towards human-readable text.


Peter Mortensen

The following recursive function might be useful:

void bin(int n)
{
    /* Step 1 */
    if (n > 1)
        bin(n/2);
    /* Step 2 */
    printf("%d", n % 2);
}

Be careful, this doesn't work with negative integers.
paniq

I optimized the top solution for size and C++-ness, and got to this solution:

inline std::string format_binary(unsigned int x)
{
    static char b[33];
    b[32] = '\0';

    for (int z = 0; z < 32; z++) {
        b[31-z] = ((x>>z) & 0x1) ? '1' : '0';
    }

    return b;
}

If you want to use dynamic memory (through std::string), you might as well get rid of the static array. Simplest way would be to just drop the static qualifier and make b local to the function.
((x>>z) & 0x01) + '0' is sufficient.
Peter Mortensen

Use:

char buffer [33];
itoa(value, buffer, 2);
printf("\nbinary: %s\n", buffer);

For more information, see How to print binary number via printf.


A previous answer said "Some implementations provide itoa(), but it's not going to be in most"?
малин чекуров
void
print_binary(unsigned int n)
{
    unsigned int mask = 0;
    /* this grotesque hack creates a bit pattern 1000... */
    /* regardless of the size of an unsigned int */
    mask = ~mask ^ (~mask >> 1);

    for(; mask != 0; mask >>= 1) {
        putchar((n & mask) ? '1' : '0');
    }

}

Or add 0 or 1 to the character value of '0' ;) No ternary needed.
Geyslan G. Bem

Print bits from any type using less code and resources

This approach has the following attributes:

Works with variables and literals.

Doesn't iterate over all bits when not necessary.

Calls printf only when a byte is complete (not unnecessarily for every bit).

Works for any type.

Works with little and big endianness (uses GCC #defines for checking).

May work with hardware where char isn't a byte (eight bits). (Thanks @supercat)

Uses typeof(), which isn't standard C but is widely supported.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <limits.h>

#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
#define for_endian(size) for (int i = 0; i < size; ++i)
#elif __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
#define for_endian(size) for (int i = size - 1; i >= 0; --i)
#else
#error "Endianness not detected"
#endif

#define printb(value)                                   \
({                                                      \
        typeof(value) _v = value;                       \
        __printb((typeof(_v) *) &_v, sizeof(_v));       \
})

#define MSB_MASK (1 << (CHAR_BIT - 1))

void __printb(void *value, size_t size)
{
        unsigned char uc;
        unsigned char bits[CHAR_BIT + 1];

        bits[CHAR_BIT] = '\0';
        for_endian(size) {
                uc = ((unsigned char *) value)[i];
                memset(bits, '0', CHAR_BIT);
                for (int j = 0; uc && j < CHAR_BIT; ++j) {
                        if (uc & MSB_MASK)
                                bits[j] = '1';
                        uc <<= 1;
                }
                printf("%s ", bits);
        }
        printf("\n");
}

int main(void)
{
        uint8_t c1 = 0xff, c2 = 0x44;
        uint8_t c3 = c1 + c2;

        printb(c1);
        printb((char) 0xff);
        printb((short) 0xff);
        printb(0xff);
        printb(c2);
        printb(0x44);
        printb(0x4411ff01);
        printb((uint16_t) c3);
        printb('A');
        printf("\n");

        return 0;
}

Output

$ ./printb 
11111111 
11111111 
00000000 11111111 
00000000 00000000 00000000 11111111 
01000100 
00000000 00000000 00000000 01000100 
01000100 00010001 11111111 00000001 
00000000 01000011 
00000000 00000000 00000000 01000001 

I have used another approach (bitprint.h) to fill a table with all bytes (as bit strings) and print them based on the input/index byte. It's worth taking a look.
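A sketch of that table idea: build all 256 byte patterns once, then print any byte by indexing (names are illustrative, not taken from bitprint.h):

#include <stdio.h>

static char bit_table[256][9];               /* 8 characters + NUL per byte value */

static void init_bit_table(void)
{
    for (int v = 0; v < 256; ++v) {
        for (int b = 0; b < 8; ++b)
            bit_table[v][b] = (v & (0x80 >> b)) ? '1' : '0';
        bit_table[v][8] = '\0';
    }
}

int main(void)
{
    init_bit_table();
    printf("%s\n", bit_table[0x44]);         /* prints 01000100 */
    return 0;
}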


I've actually had the crashing issue with VLAs on my favorite embedded compiler, when using a hardware vendor's library. Some people would argue I should just use gcc or clang, but those offer no setting other than -O0 which will refrain from making unsound optimizations (such as assuming that if a compiler would not be required to accommodate the possibility of p1 being used to access some storage within some context, and the compiler can show that p1 and p2 will be equal, it may ignore the possibility of p2 being used to access that storage).
Martijn Courteaux
void print_ulong_bin(const unsigned long * const var, int bits) {
        int i;

        #if defined(__LP64__) || defined(_LP64)
                if( (bits > 64) || (bits <= 0) )
        #else
                if( (bits > 32) || (bits <= 0) )
        #endif
                return;

        for(i = 0; i < bits; i++) { 
                printf("%lu", (*var >> (bits - 1 - i)) & 0x01);
        }
}

should work - untested.


Bo Persson

I liked the code by paniq; the static buffer is a good idea. However, it fails if you want multiple binary formats in a single printf(), because it always returns the same pointer and overwrites the array.

Here's a C-style drop-in that rotates the pointer over a split buffer.

char *
format_binary(unsigned int x)
{
    #define MAXLEN 8 // width of output format
    #define MAXCNT 4 // count per printf statement
    static char fmtbuf[(MAXLEN+1)*MAXCNT];
    static int count = 0;
    char *b;
    count = count % MAXCNT + 1;
    b = &fmtbuf[(MAXLEN+1)*count];
    b[MAXLEN] = '\0';
    for (int z = 0; z < MAXLEN; z++) { b[MAXLEN-1-z] = ((x>>z) & 0x1) ? '1' : '0'; }
    return b;
}

Once count reaches MAXCNT - 1, the next increment of count would make it MAXCNT instead of zero, which causes an out-of-bounds access of the array. You should have done count = (count + 1) % MAXCNT.
By the way, this would come as a surprise later to a developer who uses MAXCNT + 1 calls to this function in a single printf. In general, if you want to give the option for more than 1 thing, make it infinite. Numbers such as 4 could only cause problem.
the Tin Man

Here is a small variation of paniq's solution that uses templates to allow printing of 32 and 64 bit integers:

template<class T>
inline std::string format_binary(T x)
{
    char b[sizeof(T)*8+1] = {0};

    for (size_t z = 0; z < sizeof(T)*8; z++)
        b[sizeof(T)*8-1-z] = ((x>>z) & 0x1) ? '1' : '0';

    return std::string(b);
}

And can be used like:

unsigned int value32 = 0x1e127ad;
printf( "  0x%x: %s\n", value32, format_binary(value32).c_str() );

unsigned long long value64 = 0x2e0b04ce0;
printf( "0x%llx: %s\n", value64, format_binary(value64).c_str() );

Here is the result:

  0x1e127ad: 00000001111000010010011110101101
0x2e0b04ce0: 0000000000000000000000000000001011100000101100000100110011100000

This is not C, it's C++.
wnoise

No standard and portable way.

Some implementations provide itoa(), but it's not going to be in most, and it has a somewhat crummy interface. But the code is behind the link and should let you implement your own formatter pretty easily.


Marko

I just want to post my solution. It's used to get the zeroes and ones of one byte, but calling this function a few times can handle larger data blocks; I use it for 128-bit or larger structs. You can also modify it to take a size_t parameter and a pointer to the data you want to print, so it is size independent (a sketch follows the code below). But it works quite well for me as it is.

void print_binary(unsigned char c)
{
 unsigned char i1 = (1 << (sizeof(c)*8-1));
 for(; i1; i1 >>= 1)
      printf("%d",(c&i1)!=0);
}

void get_binary(unsigned char c, unsigned char bin[])
{
 unsigned char i1 = (1 << (sizeof(c)*8-1)), i2=0;
 for(; i1; i1>>=1, i2++)
      bin[i2] = ((c&i1)!=0);
}
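A sketch of the size-independent variant mentioned above: it takes a pointer and a byte count, and prints the highest-addressed byte first (so on a little-endian machine the most significant byte comes out first):

#include <limits.h>
#include <stdio.h>

void print_binary_block(const void *data, size_t size)
{
    const unsigned char *bytes = data;

    for (size_t i = size; i-- > 0; ) {
        for (unsigned char mask = 1u << (CHAR_BIT - 1); mask; mask >>= 1)
            putchar((bytes[i] & mask) ? '1' : '0');
        putchar(' ');                        /* space between bytes */
    }
    putchar('\n');
}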

andre.barata

Here's how I did it for an unsigned int

void printb(unsigned int v) {
    unsigned int i, s = 1<<((sizeof(v)<<3)-1); // s = only most significant bit at 1
    for (i = s; i; i>>=1) printf("%d", v & i || 0 );
}

Just noticed this is quite similar to @Marko solution
Is there any way to limit the bitsize of the output?
@Remian8985 Yes; the s variable is a mask with only the most significant bit of the output set, so it determines how many bits will be printed. sizeof(v)<<3 is the size of the input variable in bits (sizeof gives bytes, and <<3 is the same as multiplying by 8), so 1<<((sizeof(v)<<3)-1) puts the mask at the top bit.
luart

A one-statement generic conversion of any integral type into a binary string representation using the standard library:

#include <bitset>
MyIntegralType  num = 10;
print("%s\n",
    std::bitset<sizeof(num) * 8>(num).to_string().insert(0, "0b").c_str()
); // prints "0b1010\n"

Or just: std::cout << std::bitset<sizeof(num) * 8>(num);


That's an idiomatic solution for C++ but he was asking for C.
SarahGaidi

My solution (note that it assumes integer is a signed long: on a two's-complement machine LONG_MIN has only the sign bit set, so it works as a mask for the top bit):

long unsigned int i;
for(i = 0u; i < sizeof(integer) * CHAR_BIT; i++) {
    if(integer & LONG_MIN)
        printf("1");
    else
        printf("0");
    integer <<= 1;
}
printf("\n");