The encoding works in the following way:

First byte                     Stored values
0xxxxxxx                       7 bits
10xxxxxx  +1 byte              14 bits, value shifted by 2^7
110xxxxx  +2 bytes             21 bits, value shifted by 2^7+2^14
1110xxxx  +3 bytes             28 bits, value shifted by 2^7+2^14+2^21
  ....
11111110  +7 bytes             56 bits, value shifted by 2^7+2^14+2^21+2^28+2^35+2^42+2^49
11111111  +8 bytes             full 64 bits, value shifted by 2^7+2^14+2^21+2^28+2^35+2^42+2^49+2^56

The values are stored in big-endian order, so that the encoded byte sequences sort lexicographically in the same order as the numbers themselves. The encoding and algorithms are optimised for storing shorter numbers.

There is one invalid combination, which can be used as a special out-of-band marker: a sequence whose first two bytes are 0xff 0xff.
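
For example, the value 300 uses the two-byte form: subtracting the 2^7 shift leaves 172, and its 14-bit big-endian representation gives the bytes 0x80 0xac. A minimal stand-alone sketch of just this two-byte case (put2 is a hypothetical helper for illustration, not part of the library):

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper: encode a value in [2^7, 2^7+2^14) using the
 * two-byte form 10xxxxxx + 1 byte. Illustrative only; real code
 * should use varint_put. */
static unsigned put2(uint8_t *p, uint64_t u)
{
    uint64_t v = u - 0x80;              /* apply the 2^7 shift */
    p[0] = 0x80 | (uint8_t)(v >> 8);    /* 10xxxxxx with the top 6 bits */
    p[1] = (uint8_t)v;                  /* low 8 bits (big-endian) */
    return 2;
}

int main(void)
{
    uint8_t buf[2];
    put2(buf, 300);
    printf("%02x %02x\n", buf[0], buf[1]);   /* prints: 80 ac */
    return 0;
}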


static inline uint varint_put(byte *p, u64 u);

Encode the u64 value u into the byte sequence p. Returns the number of bytes used (at least 1 and at most 9).
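
A short sketch of typical use, assuming the declarations above come from a header named ucw/varint.h (the header name is an assumption):

#include <ucw/varint.h>   /* header name is an assumption */

static void put_examples(void)
{
    byte buf[9];                        /* the maximum encoding length */
    uint l1 = varint_put(buf, 100);     /* l1 == 1: fits into 7 bits */
    uint l2 = varint_put(buf, 300);     /* l2 == 2: the 14-bit form */
    uint l3 = varint_put(buf, ~0ULL);   /* l3 == 9: the full 64-bit form */
    (void)l1; (void)l2; (void)l3;
}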


static inline const byte *varint_get(const byte *p, u64 *res);

Decode a u64 value from the byte sequence p into res. The invalid combination is not detected. Returns a pointer to the position just past the encoded value.
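
A sketch of a put/get round trip; the returned pointer allows several consecutive values to be decoded from one buffer (the ucw/varint.h header name is again an assumption):

#include <assert.h>
#include <ucw/varint.h>   /* header name is an assumption */

static u64 round_trip(u64 u)
{
    byte buf[9];
    uint len = varint_put(buf, u);
    u64 res;
    const byte *next = varint_get(buf, &res);
    assert(next == buf + len);   /* the pointer advanced past the value */
    return res;                  /* res == u */
}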


static inline const byte *varint_get32(const byte *p, u32 *res);

Decode a value from the byte sequence p into the u32 res. The invalid combination is not detected. Returns a pointer to the position just past the encoded value. If the stored number does not fit into a u32, res is set to 0xffffffff and p itself is returned.
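
A sketch of the overflow convention described above (get32_checked is a hypothetical wrapper):

#include <ucw/varint.h>   /* header name is an assumption */

/* Returns 1 and stores the decoded value in *out on success,
 * 0 if the stored number does not fit into a u32. */
static int get32_checked(const byte *p, u32 *out)
{
    const byte *next = varint_get32(p, out);
    if (next == p)
        return 0;    /* overflow: *out was set to 0xffffffff */
    return 1;
}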


static inline int varint_invalid(const byte *p);

Does the byte sequence p encode the invalid combination?


static inline uint varint_put_invalid(byte *p);

Store the invalid combination. Always returns 2 (the number of bytes used, for consistency with varint_put).
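
A sketch of using the invalid combination as an out-of-band marker, for instance as a "no value" placeholder in a stream (an assumed usage pattern, not prescribed by the library):

#include <ucw/varint.h>   /* header name is an assumption */

static void marker_example(byte *buf)
{
    uint len = varint_put_invalid(buf);   /* writes 0xff 0xff */
    if (varint_invalid(buf))
    {
        /* the sequence is the marker, not an encoded number */
    }
    (void)len;
}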


static inline uint varint_len(const byte hdr);

Compute the length of the encoding in bytes from its first byte hdr.
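
This makes it possible to skip over an encoded value without decoding it, as in this sketch:

#include <ucw/varint.h>   /* header name is an assumption */

static const byte *skip_value(const byte *p)
{
    return p + varint_len(p[0]);   /* only the first byte is examined */
}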


static inline uint varint_space(u64 u);

Compute the number of bytes needed to store the value u.
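
A sketch of sizing an output buffer exactly before encoding (encode_alloc is a hypothetical wrapper; the caller frees the buffer):

#include <stdlib.h>
#include <ucw/varint.h>   /* header name is an assumption */

static byte *encode_alloc(u64 u, uint *len_out)
{
    uint need = varint_space(u);        /* the exact encoded length of u */
    byte *buf = malloc(need);
    if (buf)
        *len_out = varint_put(buf, u);  /* *len_out == need */
    return buf;
}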