Decimals (also called floating-point numbers on some architectures) are numbers that can represent fractional parts of a whole number.
In most languages, people refer to decimals as floating-point numbers, and several different precisions are available.
C-based Languages
The smallest decimal type is usually referred to as a float. A longer name would be single-precision floating-point number, but generally float suffices. This type isn't that common. The most common type in C-based languages is the double (double-precision). Some C and C++ compilers also provide a long double type, whose extra precision varies by platform and compiler.
Other languages
Some languages have a specific type for currency, while others have extended versions of the single-precision numbers.
In Database Design
Like programming, database design offers different precisions, and they vary by database engine. The primary difference is that you can set the size of the decimal portion of the number (the scale), as long as it is within the limits of the data type.
Data types
|Programming|Character · Floating point Number · User-defined Type · Integer|
|Database|Character · Decimal · Integer · Blob|