Numerical Analysis 1.2. Real Number Representation
Core Concepts of Real Number Representation

Normal Form: The lecturer defines a real number x in a base-B system as x = ±M × B^k, where the mantissa M satisfies 1 ≤ M < B and k is the exponent [01:44].

IEEE 754 Standard: The video explains the standard layouts for storing numbers:
- Single Precision (32-bit): 1 bit for the sign, 8 bits for the shifted exponent (E = k + 127), and 23 bits for the fractional part of the mantissa [03:39].
- Double Precision (64-bit): 1 bit for the sign, 11 bits for the shifted exponent (E = k + 1023), and 52 bits for the mantissa [11:49].

Special Symbols: The system also represents Zero (all bits zero), Infinity (exponent bits all 1, mantissa bits all 0), and NaN (Not a Number, produced by undefined operations such as 0/0) [07:20].

Machine Numbers and Errors

Machine Numbers: The specific real numbers that can be stored exactly in a computer's storage system [14:48].

Handling Non-Representable Numbers:
- Underflow: A number smaller than the smallest representable positive number is stored as zero [15:39].
- Overflow: A number larger than the largest representable number is stored as infinity [16:00].

Rounding Techniques:
- Chopping: Simply discarding the extra bits [17:12].
- Rounding: Choosing the closest machine number. If a number lies exactly halfway between two machine numbers, the result is chosen according to whether the 23rd mantissa bit is 0 or 1, so that the cumulative error balances out [20:32].

Error Measurement:
- Absolute Error: The difference between the exact value and its approximation [25:35].
- Relative Error: The absolute error divided by the exact value. The relative error of a floating-point representation is at most half of the machine epsilon (ε_m) [26:37].

4-Digit Arithmetic Example

To illustrate rounding effects, the lecturer uses a "4-digit arithmetic" system [29:35]:
- Addition: Adding 1.043 and 32.25 gives 33.293, which is rounded down to 33.29 to fit four significant digits [30:46].
- Multiplication: Multiplying the same numbers gives 33.63675, which must be rounded up to 33.64 because the fifth significant digit is 6 [31:42].
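The single-precision layout described above (1 sign bit, 8 exponent bits with E = k + 127, 23 mantissa bits) can be inspected directly. A minimal sketch in Python; the helper name decompose_single is ours, not the lecturer's:

```python
import struct

def decompose_single(x: float):
    """Split a 32-bit IEEE 754 value into sign, shifted exponent E, and mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31               # 1 sign bit
    exponent = (bits >> 23) & 0xFF  # 8 bits: shifted exponent E = k + 127
    mantissa = bits & 0x7FFFFF      # 23 fractional bits of the mantissa
    return sign, exponent, mantissa

# 1.0 = +1.0 x 2^0, so E = 0 + 127 = 127 and the fraction bits are all zero
print(decompose_single(1.0))  # (0, 127, 0)
```

The same unpacking with ">Q" and the 11/52-bit field widths would recover the double-precision layout.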
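The special symbols and the underflow/overflow rules can be observed with single-precision values; this sketch assumes NumPy is available for its float32 type:

```python
import math
import struct
import numpy as np

# Infinity in single precision: exponent bits all 1, mantissa bits all 0 [07:20]
inf_bits = struct.unpack(">I", struct.pack(">f", float("inf")))[0]
assert inf_bits == 0x7F800000

# Overflow: too large for single precision, stored as infinity [16:00]
assert math.isinf(np.float32(1e39))

# Underflow: below the smallest representable positive single, stored as zero [15:39]
assert np.float32(1e-50) == 0.0

# NaN comes from undefined operations such as inf - inf or 0/0 [07:20]
assert math.isnan(np.float32("inf") - np.float32("inf"))
```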
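The bound that the relative representation error is at most ε_m / 2 can be checked for double precision; the value 0.1 here is our illustrative choice, not one from the lecture:

```python
import math
from fractions import Fraction

eps = math.ulp(1.0)       # machine epsilon for doubles: 2**-52
exact = Fraction(1, 10)   # the exact value 0.1
stored = Fraction(0.1)    # the double actually stored for 0.1 (exact binary value)

rel_err = abs(stored - exact) / exact
assert rel_err <= Fraction(eps) / 2  # relative error <= eps_m / 2 [26:37]
```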
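The lecturer's 4-digit arithmetic can be simulated with Python's decimal module; a Context with prec=4 is our stand-in for the system used in the video:

```python
from decimal import Context, Decimal

# Four significant digits, mirroring the lecture's 4-digit arithmetic [29:35]
ctx = Context(prec=4)

a, b = Decimal("1.043"), Decimal("32.25")
print(ctx.add(a, b))       # 33.293   rounds down to 33.29 [30:46]
print(ctx.multiply(a, b))  # 33.63675 rounds up to 33.64   [31:42]
```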