
Primitive Data Types for C# Unity Devs


TL;DR: Primitive data types are the first thing I teach on my Learn Unity Roadmap. Data is information the computer stores and moves. Types tell the computer what that data represents. C# is strongly typed, so types are explicit and checked at compile time. Python and JavaScript check types at runtime, which gets error-prone in larger games.

In most programming languages, there are primitive data types and non-primitive data types. In this article, we will focus only on the primitive ones. If you have never programmed in your life, they are a great starting point. This is probably the first thing you will learn in any course, class, or tutorial. Even in my Learn Unity Roadmap, it is the first thing you are guided to after the introductory node. It is completely free, and it is not even a freemium thing, so feel free to use it. You don't even need to register.

What are primitive Data Types?

Before we can understand them, let's try to understand the word data in just plain English.

What is data?

Well, data is just information that computers can modify, move, and store. Think of things like your name or your age that you have to enter when you are creating an account. That is data a program requires from you so it can process it.

What is a data type?

I've already given you the example of a name and an age. We humans know the difference, but the computer does not. That is where types come into play: they explicitly tell the computer that the name is text and the age is a number. Some programming languages don't have explicit data types and let the computer figure them out at runtime, and the problems that causes are exactly why TypeScript, which adds explicit types to JavaScript, has become so popular. But this is not about TypeScript, it is about C#, and C# (CSharp) is a strongly, statically typed language with explicit data types.

You can write implicit types in C# as well, using the var keyword, which we will cover, but the compiler still infers the exact type at compile time. It's not like JavaScript or Python, where types are checked at runtime, which can be error-prone, especially in larger games or applications.

This doesn't matter that much if you are a complete beginner. The only thing you need to remember is that there are statically typed languages like C, C++, C#, or Go, and there are dynamically typed languages like JavaScript or Python.

Back in the day, dynamically typed languages had an advantage over statically typed languages because they didn't need a compile step, and at the time, compiling a large project could take something like 30 minutes. Devs didn't have that kind of time, so dynamic languages were popular. Nowadays, compilation times are minimal, and that is part of why TypeScript, which even has "Type" in its name, keeps climbing the language popularity charts.

Yes, I know this is a C# tutorial, and I keep talking about other languages, but in my opinion, just knowing something about other languages, too, is a skill in itself. And nowadays, with the advancement of AI, it is worth exploring other languages and ecosystems. Days of being good in just one language are gone.

What types exist in C#?

I always like to start with the big four.

bool

int

float

char

These four, bool, int, float, and char, are the building blocks of C#. Of course, there are more types, but they mostly build upon these four, adding extra range or precision. What I am about to say is an unpopular opinion, but you could build most basic games with just these. Is it practical? No. Could you do it? Yes. Do I recommend it? No.

Think of them as atoms in chemistry: they are the smallest units of data, they cannot be broken down any further, and when you combine them, you get more complex structures.

Imagine an int holds the player's health, a bool checks if the player is alive, a float is the player's speed, and a collection of chars is the player's name. That is basically the core of an entire game loop.

Most tutorials like to overcomplicate this. They will tell you to learn interfaces or some other abstract concepts first, but in reality, you will get far with just these four data types. I really like the "less is more" philosophy.

How do we apply them?

There are many ways to do this, and the right answer is "it depends," but let me elaborate further. Each type has its own use case; you don't apply them randomly unless you don't know what you are doing. So let's start with int and float.

int is a whole number, think of numbers like 1, 10, or 333. It is as simple as that. You would use an int for things like a person's age: someone is 33 years old, never 33.05 years old, so a whole number is all you need.
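As a minimal sketch (the field names here are just illustrative, not from any real project), declaring ints in a Unity script looks like this:

int _playerAge = 33;   // whole years, no decimals needed
int _health = 100;     // counters and scores are classic int use cases
int _score = 0;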

float is similar to int, but it holds a decimal value, which means precision. If you are working with data that requires precision, like the distance between two objects that are 10.3 meters apart, you have to use a float. If you used an int in that example, the distance would appear as either 10 or 11 meters, depending on rounding. Losing that .3 can cause issues in your game. An even better example is time: imagine that in a racing game a player lost a race by .3 seconds because of a rounding error. So just think about that.
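Here is a small sketch of that difference; the variable names are made up for illustration. Note the f suffix, which C# requires on float literals:

float _distance = 10.3f;   // keeps the precise .3
int _roughDistance = 10;   // an int silently drops the .3
float _lapTime = 92.4f;    // racing games live and die by fractions of a second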

bool is extremely important and is the smallest of the four. It is a condition for your program: it tells the program where to go next. It is basically an on/off switch, a gate that directs the flow, and it is used almost everywhere, all the time. You can use it to check if the player is dead or alive, or if the race has finished.
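A tiny sketch of a bool acting as that gate (again, the names are just illustrative):

bool _isAlive = true;
bool _raceFinished = false;

if (_isAlive) {
    // the gate is open: keep running this player's logic
}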

char represents exactly one single symbol. This can be a letter, a digit, or even a space. In C#, we wrap a char in single quotes, like this: 'A' or '?'. The thing is, you are not going to use chars that often; you will use the string type way more. A string is just a sequence of chars, and the reason we are not covering strings in this tutorial is that string is a non-primitive data type. It is a collection, or array, of characters.

This is what a string looks like under the hood: a sequence of characters that together spell my name, Darko.

['D', 'a', 'r', 'k', 'o']
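In C# syntax, that contrast looks like this; note the single quotes for a char versus double quotes for a string:

char _initial = 'D';           // exactly one symbol
string _playerName = "Darko";  // a sequence of chars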

Examples

Below you will find a simple example of how we could use these in an actual game. I created a PlayerController.cs class and added the primitives as fields. As you can see, every primitive is used in some way. I simulate the player being hit by something and receiving damage; if its health drops to zero or below, the player dies. This is real logic that you would use in a real project. Try to read it row by row and understand it.

using UnityEngine;

public class PlayerController : MonoBehaviour {

    // Our "Atoms" (Fields)
    int _health = 100;
    float _moveSpeed = 10.3f;
    bool _isAlive = true;
    char _rank = 'A';

    // A real method you'd use in Unity
    private void OnDamageTaken(int damage) {
        if (!_isAlive) return; // Don't hit a dead player!

        // 1. Subtract the damage
        _health -= damage; 
        
        // 2. Reduce speed as a penalty (using our float)
        _moveSpeed -= 0.6f;

        Debug.Log($"Player with Rank {_rank} hit! Speed is now: {_moveSpeed}");

        // 3. Check the bool
        if (_health <= 0) {
            _isAlive = false;
            Debug.Log($"Player of Rank {_rank} has been killed.");
        }
    }
}

Explaining memory complexity

I'd like to cover one more thing that is often skipped in most tutorials, and that is memory complexity. Not all types are made equal; they differ in size. Below is a table of sizes for each primitive.

  • bool — 1 byte: a tiny light switch (on/off).
  • char — 2 bytes: a single letter tile (C# uses Unicode).
  • int — 4 bytes: a standard box for whole numbers.
  • float — 4 bytes: a standard box for decimals.

As you can see, bool is the smallest in size, while int and float share the same size. Why are they the same size? Because they are the same length in bits; a float just uses its bits differently, splitting them between the digits and the position of the decimal point.
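You don't have to take the table on faith: C#'s sizeof operator reports these sizes directly, so a quick check inside any Unity script would look like this:

Debug.Log(sizeof(bool));   // 1
Debug.Log(sizeof(char));   // 2
Debug.Log(sizeof(int));    // 4
Debug.Log(sizeof(float));  // 4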

Let me explain bit size and overflow

Let's use int here because it is easiest to explain. We talked about sizes for each type in the previous paragraph. This is something most tutorials will never teach you, and I consider it important.

1 byte is 8 bits. If we multiply 8 by 4, we get 32, so an int is a 32-bit value. You've probably seen "32-bit vs 64-bit" somewhere; that is basically what it refersts to: the length of a value in bits.

In C#, int is a 32-bit value, or 4 bytes. Because one bit is used to track whether the number is positive or negative, 31 bits are left for the value itself. That means the maximum number an int can store is exactly 2,147,483,647. If we go over that limit, we cause an integer overflow: the number "wraps around" to the lowest possible negative value, -2,147,483,648, and starts climbing back toward 0.
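You can trigger the wrap-around yourself. C# exposes the limit as int.MaxValue, and wrapping the addition in unchecked makes the overflow behavior explicit:

int max = int.MaxValue;            // 2,147,483,647
int wrapped = unchecked(max + 1);  // wraps around to -2,147,483,648
Debug.Log(wrapped);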

You may think that is a large number and that you will never go over it. But consider that the US national debt is around $39,016,762,910,245; if you tried to store that number in an int, you could not. Even a float would struggle, because it would lose precision and start rounding. Just compare the numbers below, and the difference is obvious. The US debt is much larger.

2,147,483,647
39,016,762,910,245

Signed vs unsigned value types

As said above, int uses one of its bits to switch between negative and positive values. But we don't always need negative values, because not everything needs them. That is where unsigned types come into play. If you explicitly tell the compiler to use an unsigned int, the value range shifts: instead of splitting the range between negative and positive, all 32 bits are used for non-negative numbers. That means the maximum value of 2,147,483,647 becomes 4,294,967,295.

The number is greater now, but is it enough to store the US debt?

4,294,967,295
39,016,762,910,245

And as you can see, it is still significantly smaller. While this does not solve our problem, it is still worth knowing. Let me show you how to declare an unsigned int.

private uint _amount = 100000u;

To declare an unsigned value, you use the uint type keyword. The u at the end of the literal (100000u) is a suffix that marks the number itself as unsigned.

Larger numbers and larger value types

We have only covered four types so far, plus a brief mention of strings. But there are more. The types we have talked about are fast and small, and most of the time they are just enough. At some point, though, we are going to need larger numbers, for example in finance, because we don't want to mess with someone's money. That's why we must learn them!

Whole Numbers (Non-Decimal)

  • int (4 Bytes): This is your default for most whole numbers. It holds up to 2.1 Billion.
  • uint (4 Bytes): Used for positive-only counts or IDs. It holds up to 4.2 Billion.
  • long (8 Bytes): This is the "shipping container" for huge numbers like the US National Debt. It holds up to 9 Quintillion.
  • ulong (8 Bytes): The massive positive-only version of a long, holding up to 18 Quintillion.
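Putting that list to work, a long swallows the US debt figure from earlier with room to spare; the L suffix marks the literal as a long:

long usDebt = 39016762910245L;   // fits: long max is about 9.2 quintillion
// int tooSmall = 39016762910245;  // compile error: cannot implicitly convert long to int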

Decimal Numbers (Floating-Point)

  • float (4 Bytes): Has about 7 digits of precision. This is the standard for Graphics and Physics in Unity.
  • double (8 Bytes): Has about 15 digits of precision. Best for standard 64-bit math calculations.
  • decimal (16 Bytes): Has about 28 digits of precision. This is the gold standard for Financial and Monetary data because it avoids the binary rounding errors that float and double suffer from.
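A short sketch of why decimal earns those 16 bytes for money; the m suffix marks a decimal literal:

double rounded = 0.1 + 0.2;     // binary double: not exactly 0.3
decimal money = 0.1m + 0.2m;    // base-10 decimal: exactly 0.3
Debug.Log(rounded == 0.3);      // False, the classic floating-point surprise
Debug.Log(money == 0.3m);       // True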

End

I hope this helps you understand primitive data types. In my experience working with students, most beginners skip fundamentals like these and never develop the right programming intuition. In my personal opinion, even though this looks super simple, you must master it. Job interviews will even ask about this to filter out those who skipped the fundamentals. It's very common, especially with self-taught developers.

This tutorial is an extension of my Learn Unity Roadmap, and if you are looking for a structured way to learn C# Unity programming, it is completely free. Once you are comfortable with primitives, move on to conditionals, methods, and classes. I wrote a full plan for that order in how to actually learn Unity in 2026 in the age of AI, including where AI fits as a tutor instead of a code writer.