While unsigned integers can be tricky to work with, they do have legitimate uses. Here are some real-world scenarios where unsigned integers make sense:
When you're counting things or working with sizes, the number can never be negative. For example, you can't have -3 players in a game. However, we typically still use regular signed integers for these, because the benefits rarely outweigh the risks of unsigned values wrapping around past zero.
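Here's a minimal sketch of that wrap-around risk (the PlayerCount name is just for illustration). Decrementing an unsigned count that is already zero doesn't produce -1; it wraps around to the type's maximum value:
#include <cstdint>
#include <iostream>
using namespace std;

int main() {
  // A player count stored as an unsigned type
  uint32_t PlayerCount{0};

  // Removing a player from an empty lobby doesn't
  // give -1; the value wraps around to the maximum
  PlayerCount -= 1;

  cout << "Players: " << PlayerCount;
}
Players: 4294967295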
When working directly with computer hardware, unsigned integers are often required because they map directly to how the hardware represents values. For example, memory addresses are never negative.
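As a small sketch of this, std::uintptr_t (an unsigned integer type provided on mainstream platforms to hold pointer values) can store a variable's address; the Health variable here is just illustrative:
#include <cstdint>
#include <iostream>
using namespace std;

int main() {
  int Health{100};

  // Addresses are never negative, so an unsigned
  // type is a natural fit for storing one
  uintptr_t Address{reinterpret_cast<uintptr_t>(&Health)};

  cout << "Health is stored at address: " << Address;
}
The exact address printed will differ on every run.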
When working with file sizes or network data, unsigned integers are commonly used because these values are never negative:
#include <cstdint>
#include <iostream>
using namespace std;

int main() {
  // File sizes are never negative
  uint64_t VideoFileSize{1'500'000'000};

  // Data amounts are never negative
  uint32_t BytesDownloaded{750'000};

  cout << "Downloading " << BytesDownloaded
    << " bytes out of " << VideoFileSize;

  // This would make no sense - a file
  // can't be -1000 bytes!
  int NegativeFileSize{-1'000};
}
Downloading 750000 bytes out of 1500000000
When doing binary operations (working directly with bits), unsigned integers are often preferred because operations like shifting and wrap-around on overflow have well-defined behavior for unsigned types:
#include <cstdint>
#include <iostream>
using namespace std;

int main() {
  // Using a binary literal
  uint8_t Flags{0b00000101};

  // Each bit can represent a yes/no setting
  // First bit
  bool IsHappy = (Flags & 0b00000001) != 0;

  // Third bit
  bool IsFlying = (Flags & 0b00000100) != 0;

  cout << "Character is happy: " << IsHappy
    << "\nCharacter is flying: " << IsFlying;
}
Character is happy: 1
Character is flying: 1
However, unless you're doing one of these specialized tasks, it's usually better to use regular signed integers. They're safer because subtracting past zero wraps around to a huge value instead of producing a negative number, and because mixing signed and unsigned values in arithmetic or comparisons silently converts the signed value, which can produce surprising results.
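As a quick illustration of that second point (the Balance and Price names are just for this sketch), comparing a negative signed value against an unsigned one doesn't behave the way you might expect:
#include <iostream>
using namespace std;

int main() {
  int Balance{-1};
  unsigned int Price{1};

  // Before comparing, -1 is converted to unsigned,
  // becoming a huge positive number
  if (Balance < Price) {
    cout << "-1 is less than 1, as expected";
  } else {
    cout << "Surprise: the comparison says -1 is NOT less than 1";
  }
}
Surprise: the comparison says -1 is NOT less than 1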
The general rule is: use signed integers by default, and only use unsigned integers when you have a specific reason to do so and understand the implications.