Consider the following literal character sequences:
65
'A'
8.0f
131072.0
Each of these is stored internally with the same single nonzero byte, 0100 0001 (hexadecimal 41); any remaining bytes in the wider types are zero. Yet because of the punctuation surrounding these values, the compiler can infer what types they have from their context:
65       --> int
'A'      --> char
8.0f     --> float
131072.0 --> double
These values are typed literally into our source code, and their types are determined by the way they are written: the punctuation around them (single quotes, a decimal point, a suffix such as f) tells the compiler which data type is meant.
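One way to see this classification directly is a small C11 sketch using _Generic, which selects a branch according to the compile-time type of an expression. (A formality worth noting: in C, a character constant such as 'A' actually has type int; it is char only in C++, although in both languages it denotes a single character code.)

#include <stdio.h>

/* Map an expression to the name of the type the compiler assigns to it.
   _Generic (C11) selects its branch from the expression's compile-time type. */
#define TYPE_NAME(x) _Generic((x), \
    int:     "int",                \
    char:    "char",               \
    float:   "float",              \
    double:  "double",             \
    default: "something else")

int main(void)
{
    printf("65       --> %s\n", TYPE_NAME(65));       /* int               */
    printf("'A'      --> %s\n", TYPE_NAME('A'));      /* int (char in C++) */
    printf("8.0f     --> %s\n", TYPE_NAME(8.0f));     /* float             */
    printf("131072.0 --> %s\n", TYPE_NAME(131072.0)); /* double            */
    return 0;
}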
The internal value of each is constant; it is always that same bit pattern. The literal 65 will always be interpreted as an integer with that value. The literal 'A' will always be interpreted as a single character. The literal 8.0f will always be interpreted as a float; written without the suffix, as 8.0, it is interpreted as a double and stored with a different bit pattern.
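To make the shared byte pattern concrete, the following sketch (assuming IEEE 754 floating point, a 32-bit int, and a little-endian host) stores each literal in a variable of the matching type and prints its bytes, most significant byte first:

#include <stdio.h>

/* Print the bytes of an object, most significant byte first.
   (Walking memory in reverse gives that order on a little-endian host.) */
static void print_bytes(const char *label, const void *obj, size_t size)
{
    const unsigned char *bytes = obj;
    printf("%-10s", label);
    for (size_t i = size; i-- > 0; )
        printf(" %02X", bytes[i]);
    printf("\n");
}

int main(void)
{
    int    i = 65;
    char   c = 'A';
    float  f = 8.0f;
    double d = 131072.0;

    print_bytes("65",       &i, sizeof i);
    print_bytes("'A'",      &c, sizeof c);
    print_bytes("8.0f",     &f, sizeof f);
    print_bytes("131072.0", &d, sizeof d);
    return 0;
}

On such a machine this prints something like:

65          00 00 00 41
'A'         41
8.0f        41 00 00 00
131072.0    41 00 00 00 00 00 00 00

In every case the only nonzero byte is 41, the hexadecimal form of 0100 0001; the remaining bytes are zero padding to the width of the type.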