These data types are universal; however, some languages will interpret them contextually.

Boolean
- Used for true/false statements
- Has only two possible values, which may be represented as a number or a word
  + 0 / FALSE
  + 1 / TRUE

Integer
- Whole numbers only, no decimals
- Sometimes called a short or a word
- 2 bytes (16 bits) in length
- Two types:
  + Unsigned short: 0 to 65,535         // 0xFFFF
  + Signed short: -32,768 to 32,767     // 0x7FFF
- Often declared as:
  + int x
  + integer x
  + short x
  + unsigned short x
  + signed short x

Long Integer
- Denoted by placing "long" before "integer" in place of "short"
- Also called a "dword" (double word); not to be confused with the floating-point type "double"
- Whole number with a range greater than or equal to a standard integer
- At least 32 bits (4 bytes) in length; 64 bits (8 bytes) on many platforms
- Range varies greatly per language

Floating Point (Float)
- Used to represent approximations of real numbers
- Uses decimals and often scientific notation

Char[n]
- Char refers to a character, often used to represent ASCII symbols and strings
- A single char is 1 byte in length
  + char a = 'a'    // 'a' in this example translates to the byte 0x61
- When a length n is specified, the char becomes an array (a string) of that many bytes
  + char[10] = "abcdefghij"    // this string is 10 bytes in length: 0x 61 62 63 64 65 66 67 68 69 6A
- Many high-level languages simply call this type "string" and will automatically determine the length
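As a rough illustration of the above, here is a minimal C sketch. It assumes a typical platform where a short is 16 bits and a char is 1 byte; exact sizes vary by compiler and architecture, so the printed values are examples rather than guarantees.

```c
#include <stdio.h>
#include <stdbool.h>
#include <limits.h>

int main(void)
{
    bool flag = true;                 /* Boolean: only 0 (false) or 1 (true) */

    unsigned short u = 65535;         /* 16-bit unsigned max: 0xFFFF */
    signed short   s = -32768;        /* 16-bit signed min; max is 0x7FFF (32,767) */

    long big = 2147483647L;           /* long integer: at least 32 bits (4 bytes) */

    float f = 3.14159f;               /* float: approximation of a real number */

    char c = 'a';                     /* single char: 1 byte, 0x61 in ASCII */
    char word[11] = "abcdefghij";     /* 10 data bytes (0x61..0x6A) plus a trailing NUL */

    printf("bool:   %zu byte(s), value %d\n", sizeof flag, flag);
    printf("short:  %zu byte(s), range %d to %d, unsigned max %u\n",
           sizeof s, SHRT_MIN, SHRT_MAX, (unsigned)u);
    printf("long:   %zu byte(s), value %ld\n", sizeof big, big);
    printf("float:  %zu byte(s), value %f\n", sizeof f, f);
    printf("char:   %zu byte(s), 'a' = 0x%02x\n", sizeof c, (unsigned char)c);
    printf("string: \"%s\" is %zu data bytes\n", word, sizeof word - 1);
    return 0;
}
```

On a typical 64-bit Linux build this reports 2-byte shorts and 8-byte longs, while a 32-bit Windows compiler would report a 4-byte long, which is why the long integer range "varies greatly per language" (and per platform).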