Components of Computing Devices




This week we are going to talk about some of the primary components of computing devices.




Week 3 – Components of Computing Devices – Binary Numbers




Before we take a look under the hood in the computer, I would like to talk to you a little about data representation in the computer.


ASCII: The first character encoding. Defined 128 different standard alphanumeric characters that could be used on the internet. Supports numbers (0-9), English letters (A-Z), and some special characters like ! $ + - ( ) @ < > .


ANSI (Windows-1252): The original Windows character set. Supported 256 different character codes.


ISO-8859-1: The default character set for HTML 4. It also supported 256 different character codes.


UTF-8 (Unicode): The default character set for HTML 5. Supports almost all of the characters and symbols in the world.


Week 3 – Components of Computing Devices – Meta Character Sets


If you have studied HTML, you will have noticed this required HTML 5 directive: <meta charset="utf-8">. HTML 5 validators will issue an error if this statement is missing from the document. Here is what is going on with the statement. UTF-8 is the default character set for HTML 5. Character sets are encoding standards. I would like to talk with you for a few minutes about what that means.


When the first computers were designed, their designers realized that there would have to be a highly standardized mechanism for determining what a computer thinks when you press a certain key on the keyboard, or when a certain character is read into memory. If this mechanism didn’t exist, computers wouldn’t be able to share data. So, one of the very earliest standards created for computers was the ASCII character coding mechanism. ASCII stands for “American Standard Code for Information Interchange”. ASCII codes used a 7-bit byte, which means that they could define 2 to the 7th power, or 128, different alphanumeric characters. Other character sets that have been created since are listed above, including UTF-8.


In order to understand character codes, you need to understand character representation in a computer. So, let’s look at a few things.






Week 3 – Components of Computing Devices – Binary Numbers


The slide shows the number “10”. What number are you looking at here? If you said “ten”, you would be correct. However, you would also be correct if you said “two”, or if you said “sixteen”. So, what are we talking about?






Week 3 – Components of Computing Devices – Binary Numbers


I am talking about the base that the number is represented in. We tend to think in Base 10, so the number on the slide looks most like a “ten” to us. But “10” isn’t equal to “ten” just because it looks like “10”. Each digit in the number on this slide has a meaning, and the meaning relates to two things: the base of the number, and the position of the digit. If we are thinking about Base 10, this number is equal to “ten” because there is a “1” in the 10’s place in the number.


To figure out the value of any decimal (base 10) number, you just add up the digits, taking into account each digit’s place in the number string. So, the number “10” is equal to 0 x 10 to the zeroth power + 1 x 10 to the first power. Any number to the 0th power is equal to 1, so this number is equal to 0 x 1 + 1 x 10, or 10. Before we move on to the next slide, note that in Base 10 notation, there are 10 digits we can use before we have to move to the next place.
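That digit-by-digit rule can be written out directly. This is a hypothetical helper sketched in Python, not something from the slides: it walks a digit string from the rightmost place and sums digit times base to the position.

```python
# Hypothetical helper: expand a digit string in a given base by summing
# digit * base**position, starting from the rightmost digit (position 0).
def positional_value(digits, base):
    value = 0
    for position, digit in enumerate(reversed(digits)):
        value += int(digit, base) * base ** position
    return value

print(positional_value("10", 10))    # 0 x 1 + 1 x 10 = 10
print(positional_value("3702", 10))  # 2 + 0 + 700 + 3000 = 3702
```

The same function works for any base, which is exactly the point of the next two slides.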






Week 3 – Components of Computing Devices – Binary Numbers


If we are working in base 2 (binary) notation, our number becomes equal to: 0 x 2 to the 0th power + 1 x 2 to the 1st power, or 0 x 1 + 1 x 2, or 2! The same algorithm comes up with a totally different value because we are thinking in a different base. Note that in binary notation, we only have 2 digits to work with before we have to move to the next place.






Week 3 – Components of Computing Devices – Binary Numbers


Now that you are refreshed on notation, I am sure you realize that in base 16 (hexadecimal) notation, our number is equal to: 0 x 16 to the 0th power + 1 x 16 to the 1st power, or 0 x 1 + 1 x 16, or 16! Since we only have ten digits available in our Arabic numbering system, how can we represent the values ten through fifteen with single symbols? The answer is to add six letters from our alphabet to stand in for the missing six numerals: “A” equals 10, “B” equals 11, and so forth, up to “F”, which equals 15. You have most likely seen the results of this when looking at addresses in your computer – now you will understand that those addresses are expressed in hex!
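You can confirm all three readings of “10” with Python’s built-in int() function, which applies this same positional rule for any base from 2 to 36:

```python
# The same digit string "10" evaluates differently depending on the base.
for base in (2, 10, 16):
    print(base, int("10", base))   # 2 -> 2, 10 -> 10, 16 -> 16

# In hexadecimal, the letters A-F stand for the values 10-15.
print(int("F", 16))   # 15
print(int("FF", 16))  # 255
```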




Week 3 – Components of Computing Devices – Binary Numbers


So, what does this have to do with us? Here we are again, back to vacuum tubes.




Week 3 – Components of Computing Devices – Binary Numbers


ENIAC contained 18,000-20,000 vacuum tubes. Numbers in ENIAC were stored in decimal, not binary, notation, in an attempt to minimize the number of circuits required. This involved using a series of circuits called “ring counters”. The approach didn’t really work out, however, and virtually all digital computers since then have been binary in design. The vacuum tube is a great way to demonstrate the idea behind binary representation, because the vacuum tube is essentially a binary device.




Week 3 – Components of Computing Devices – Binary Numbers


Let’s build an extremely crude binary device with a standard keyboard. Let the absence of power represent a “0” and the presence of power represent a “1”. Pressing a key on the keyboard will switch power to some tubes and not others. Using a 7-bit byte, what is the biggest number I can represent in binary notation with this mechanism?




Week 3 – Components of Computing Devices – Binary Numbers


Let’s switch to an 8-bit byte. Now what is the biggest number I can represent?
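Both questions can be checked quickly: a pattern of n bits that are all ones has the value 2 to the nth power minus 1.

```python
# The largest value an all-ones pattern of n bits can represent is 2**n - 1.
for bits in (7, 8):
    largest = 2 ** bits - 1
    print(f"{bits} bits: {largest} ({largest:b})")
# 7 bits -> 127, 8 bits -> 255
```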




Week 3 – Components of Computing Devices – Binary Numbers


Now I have two questions for you. Given the states of the eight bits in this byte, what number is represented? What key did I press on the keyboard?




Week 3 – Components of Computing Devices – ASCII Table


The answer to the second question lies in this slide. This is an ASCII chart. It shows the assigned ASCII codes for the characters on a normal keyboard (upper case alphas, lower case alphas, numbers, and symbols) in binary, octal, decimal, and hex. Note that this chart is an old one for the seven-bit byte. Although this representation will accommodate the normal characters on a standard English keyboard, it is obvious that there are not enough codes available for non-printing characters and for character sets from other languages, industries, and disciplines. However, the chart does let you determine the key I pressed on the keyboard – a capital A.
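Python’s built-ins expose these same ASCII assignments, so you can verify the answer from the chart:

```python
# Capital A is assigned ASCII code 65, whose eight-bit pattern is 01000001.
code = ord("A")
print(code)                 # 65
print(format(code, "08b"))  # 01000001
print(chr(0b01000001))      # A
```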




Week 3 – Components of Computing Devices – Unicode Table


This partial chart is a table for the encoding mechanism called Unicode. Unicode was designed to address the limitations of ASCII. It is built on the 8-bit byte, and can combine from one to four bytes to make a character. UTF-8 encodes Unicode code points in one to four bytes, and can therefore accommodate 1,112,064 characters, close to all the characters that exist.


This slide shows the Unicode chart. Note that the characters on the Unicode chart with values between 0 and 127 are identical to the characters on the ASCII chart.




Week 3 – Components of Computing Devices


Abstract Architecture of all Computers


All computing devices have a similar set of components, although in phones or tablets some of them might be combined into a single part.


Moving on to Components, all computing devices, regardless of their shape, size or purpose, share certain required elements.


There needs to be a certain amount of memory. This memory has to be sufficient to accommodate the data that are to be processed and the most basic of the operations necessary to do that processing. A computer’s memory is normally transient, and its contents last only until the computer is turned off.


A Central Processing Unit, which houses special structures necessary for logical and arithmetic processing, is required.


Because Primary Memory is transient, there needs to be some mechanism for what we call “Persistent Storage” – a way to store data that persists even when the computer is turned off. Over the years, the Persistent Storage requirement has been filled with punched cards, magnetic tape, floppy disks, compact disks, internal and removable external hard disks, memory sticks, thumb drives, and so forth.


Input device handlers for the mouse, keyboard, touchpad, etc. are required as well.


Output device handlers for the display screen, printer, etc. are also required.


There can be additional components such as cameras, microphones, speakers, and so forth.




Week 3 – Components of Computing Devices


The motherboard is the platform in the computer where the components are laid out and connected together. My experience has been that learning something in our industry is greatly assisted by two things: having a need to know whatever it is, and learning the acronyms associated with it. Here I am presenting three different labeled images of motherboards that I was able to find. It is fun to compare them and see which elements they all have in common.




Week 3 – Components of Computing Devices




Week 3 – Components of Computing Devices




Week 3 – Components of Computing Devices – Terminology


On this slide and the next one, I have compiled a couple dozen terms, with their definitions, that relate to things you will find on the motherboard. I have found that reading the glossary helps in understanding the images of the motherboard, and studying the images helps in understanding the acronyms.




Week 3 – Components of Computing Devices – Terminology




Week 3 – Components of Computing Devices – John Von Neumann


This is a name you have probably heard. John von Neumann was a child prodigy, born János Neumann in 1903 to wealthy parents in Budapest, Hungary. Von Neumann was a genius mathematician who by 1929 had published thirty-two major papers. His published work, which is prodigious, spans the fields of mathematics, physics, quantum mechanics, economics and computer sciences.


In 1929 he was invited to lecture at Princeton University in Princeton, New Jersey. In 1933 he was offered a lifetime professorship on the faculty of the Institute for Advanced Study, where he remained as a mathematics professor until his death. The Institute for Advanced Study (IAS) is an independent, postdoctoral research center based in Princeton, New Jersey, where von Neumann joined such luminaries as Albert Einstein and Kurt Gödel. In 1937 he became a naturalized US citizen and Anglicized his first name to John, adding the “von” from his father’s title. He worked extensively as a consultant on the US government’s defense projects, including the Manhattan Project, and is credited with the equilibrium strategy of mutually assured destruction.




Week 3 – Components of Computing Devices – John Von Neumann


Just a few of the many accomplishments that Von Neumann is credited with in the field of Computing are:


Inventing a sorting algorithm called the merge sort algorithm,


Describing a new architecture for EDVAC, the sequel to ENIAC,


Working on game theory and the philosophy of artificial intelligence with Alan Turing,


Contributing to the development of the Monte Carlo method (which allowed solutions to complicated problems to be approximated using random numbers),


Developing a format for making pseudorandom numbers, the middle-square method, and


Doing pioneering work in the field of cellular automata.




Week 3 – Components of Computing Devices – Von Neumann Architecture


In 1945, while consulting for the Moore School of Electrical Engineering at the University of Pennsylvania, von Neumann wrote a paper titled “First Draft of a Report on the EDVAC”, which described a design for EDVAC as a stored-program machine. EDVAC was being planned while ENIAC was being completed, and in his paper von Neumann proposed a different plan for EDVAC. The plan described a computer architecture in which the data and the program are both stored in the computer’s memory in the same address space, as opposed to the earliest computers, which were “programmed” using a separate medium such as a paper tape or wired board.


This architecture, called The Von Neumann Architecture, has endured as a basic plan for computers ever since. It is the basis for most modern computer designs.




Week 3 – Components of Computing Devices


Programming ENIAC required flipping switches and rewiring some of its parts: although the machine itself computed quickly, it could often take a quarter of an hour to reprogram ENIAC for a computation that only took thirty seconds to run. In contrast, EDVAC read in input that encoded its operations, stored those operations in memory, and executed them from there. Eckert had proposed the idea almost a year earlier, but von Neumann seized upon it and became its champion.




Week 3 – Components of Computing Devices – Von Neumann Architecture


This is another picture of the Von Neumann architecture, with some more details filled in.




Week 3 – Components of Computing Devices – Von Neumann Architecture


The control unit of the Central Processing Unit regulates and integrates the operations of the computer. It selects and retrieves instructions from the main memory in proper sequence and interprets them so as to activate the other functional elements of the system at the appropriate moment to perform their respective operations. All input data are transferred via the main memory to the arithmetic-logic unit for processing. That processing involves the four basic arithmetic functions (i.e., addition, subtraction, multiplication, and division) and certain logic operations, such as comparing data and selecting the desired problem-solving procedure, or a viable alternative, based on predetermined decision criteria.




Week 3 – Components of Computing Devices – Von Neumann Architecture


The Arithmetic Logic Unit (ALU) allows the computer to add, subtract, and to perform basic logical operations such as AND/OR.




Week 3 – Von Neumann Architecture


Central Processing Unit – Arithmetic Logic Unit
















Operations like this in the ALU utilize what we call logic gates. These logic gates work by taking two inputs (one input for the ‘NOT’ gate) and producing an output. If we consider the ‘AND’ gate the output will be true, or ‘1’ (or a high voltage), if input #1 and input #2 are true, and the output will be false, or ‘0’ (or a low voltage), if one or both inputs are false. Likewise, if we consider the ‘OR’ gate the output will be true if input #1 or input #2 is true. The ‘XOR’ gate output will be true if either input is true, but false if both inputs are true; this is an implementation of the exclusive ‘OR’ logic operation. The ‘NOT’ gate will output the opposite of the input; so if the input is true the ‘NOT’ gate’s output will be false. The ‘NAND’, ‘NOR’, and ‘XNOR’ gates are implementations of the ‘AND’, ‘OR’, and ‘XOR’ gates respectively with a ‘NOT’ gate prior to the output; so a ‘NAND’ gate will return what an ‘AND’ gate does not. All of this should sound very familiar to you – remember truth tables?
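The truth tables just described can be sketched as small Python functions on bits (0 or 1). This is an illustration of the logic, not how an ALU is actually built:

```python
# Gates as functions on bits (0 or 1), mirroring the truth tables above.
def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    return a ^ b

def NOT(a):
    return 1 - a

# NAND, NOR, and XNOR are the base gates followed by a NOT.
def NAND(a, b):
    return NOT(AND(a, b))

def NOR(a, b):
    return NOT(OR(a, b))

def XNOR(a, b):
    return NOT(XOR(a, b))

# Print the combined truth table for all four input combinations.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b), XOR(a, b),
              NAND(a, b), NOR(a, b), XNOR(a, b))
```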






Week 3 – Von Neumann Architecture


Central Processing Unit – Arithmetic Logic Unit




4-bit AND


These logic functions are by themselves an important part of a CPU’s functionality, but performing logic operations on two inputs is only so useful. By combining these gates together, we can build devices with more inputs. For example, you can combine three ‘AND’ gates so that they produce an output that is true only when all four inputs are true. In essence, this is a 4-bit ‘AND’ gate. You can extrapolate from this and form an 8-bit ‘AND’ gate by combining two 4-bit ‘AND’s and one 2-bit ‘AND’.
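The three-gate combination reads directly in code. A minimal sketch, reusing a two-input AND function like the one above:

```python
# A two-input AND gate on bits.
def AND(a, b):
    return a & b

# Three two-input AND gates wired together: the output is 1 only when
# all four inputs are 1 - in essence, a 4-bit AND gate.
def AND4(a, b, c, d):
    return AND(AND(a, b), AND(c, d))

print(AND4(1, 1, 1, 1))  # 1
print(AND4(1, 1, 0, 1))  # 0
```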




Week 3 – Von Neumann Architecture


Central Processing Unit – Arithmetic Logic Unit




Half Adder


This circuit is called a half-adder, and for it the inputs are not true or false but ‘1’ or ‘0’. The output of this adder is the sum of the inputs along with a carry bit. If the inputs are ‘1’ and ‘1’, we are adding 1 plus 1. The output labeled ‘SUM’ is just an ‘XOR’ of the inputs, which will be ‘0’. The output labeled ‘CARRY’ comes from an ‘AND’ gate, which of course will be ‘1’. The answer therefore is ‘10’, which is the binary sum of ‘1’ and ‘1’. If the inputs are ‘1’ and ‘0’, the ‘SUM’ will be ‘1’ and the ‘CARRY’ will be ‘0’, giving an answer of ‘01’, or just 1.
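The half-adder’s behavior is just two of the gates we already know, side by side:

```python
# Half-adder: SUM is the XOR of the inputs, CARRY is the AND.
def half_adder(a, b):
    return a ^ b, a & b  # (sum, carry)

print(half_adder(1, 1))  # (0, 1) -> binary 10
print(half_adder(1, 0))  # (1, 0) -> binary 01
```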




Week 3 – Von Neumann Architecture


Central Processing Unit – Arithmetic Logic Unit


Full Adder


The full-adder is two half-adders with one additional ‘OR’ gate. To use a full-adder to add two binary numbers of arbitrary size, you begin with the rightmost bit, called the least significant bit (LSB), of each number, with a carry-in bit of ‘0’. You then add the two bits, record the sum, and use the carry-out bit as the carry-in bit when adding the next two bits, moving towards the most significant bits (MSB). By repeating this process you can add two binary numbers of any length. This process is known as a ripple carry.
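The full-adder and the ripple-carry process it enables can be sketched as follows. The bit lists here are least significant bit first, matching the order in which the carry ripples:

```python
# Full-adder: two half-adders plus an OR on their carry outputs.
def full_adder(a, b, carry_in):
    s1 = a ^ b            # first half-adder: sum of the two input bits
    c1 = a & b
    s2 = s1 ^ carry_in    # second half-adder: add in the carry
    c2 = s1 & carry_in
    return s2, c1 | c2    # (sum bit, carry out)

# Ripple carry: feed each column's carry out into the next column's
# carry in, working from the least significant bit upward.
def ripple_add(x_bits, y_bits):
    carry = 0
    result = []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 3 (LSB-first [1, 1, 0]) + 5 (LSB-first [1, 0, 1]) = 8 (LSB-first [0, 0, 0, 1])
print(ripple_add([1, 1, 0], [1, 0, 1]))
```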




Week 3 – Von Neumann Architecture


Central Processing Unit – Registers


Registers hold instructions and other data. Registers supply operands to the ALU and store the results of operations.


A register may hold an instruction, a storage address, or any kind of data (such as a bit sequence or individual characters). Some instructions specify registers as part of the instruction. For example, an instruction may specify that the contents of two defined registers be added together and then placed in a specified register.


A register must be large enough to hold an instruction – for example, in a 64-bit computer, a register must be 64 bits in length. In some computer designs, there are smaller registers – for example, half-registers – for shorter instructions. Depending on the processor design and language rules, registers may be numbered or have arbitrary names.


A processor typically contains multiple index registers, also known as address registers or registers of modification. The effective address of any entity in a computer includes the base, index, and relative addresses, all of which are stored in the index register.


A shift register is another type of register. Bits enter the shift register at one end and emerge from the other end. Flip-flops, also known as bistable gates, store and process the data.




Week 3 – Components of Computing Devices – Gordon Moore


Let me introduce you to an interesting prophet of our industry: Gordon Moore was the cofounder, with Robert Noyce, of Intel Corporation.


He is famous for Moore’s law that predicted: “The number of transistors per silicon chip doubles each year.” Another way of expressing this prediction would be to say that computing would dramatically increase in power, and decrease in relative cost, at an exponential pace.


In 1975, as the rate of growth began to slow, Moore revised his time frame to two years.


In actuality, over roughly 40 years from 1961, the number of transistors has doubled approximately every 18 months, but this does not really detract from Moore’s prescience in understanding the explosive nature of the growth in power that would occur.
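As a back-of-the-envelope check on that 18-month figure, the compounding is easy to compute (this is an illustration of the arithmetic, not data from any chart):

```python
# A doubling every 18 months over 40 years gives 40*12/18, about 26.7
# doublings - a growth factor of roughly a hundred million.
months = 40 * 12
doublings = months / 18
growth = 2 ** doublings
print(f"{doublings:.1f} doublings, growth factor {growth:.2e}")
```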




Week 3 – Components of Computing Devices – Gordon Moore


Looking at this chart, you can see the exponential growth, but to actually track this growth turns out to be complicated.




Week 3 – Processor History


On the next two slides you can see the most comprehensive list of processors I could find. It is hard to figure out exactly when the doubling occurs from this chart, because it isn’t sorted in a helpful order and because other factors, such as multiple-core processors, enter into the transistor count. But the one thing that is unmistakable from the chart is the reality of massive increases in computing power since the 70’s!




Week 3 – Processor History




Week 3 – Processor History


I have read a lot of articles expressing the opinion that Moore’s law will soon cease to operate, although Intel doesn’t seem to agree with this opinion. Several reasons have been proffered for these opinions, such as:


Clock speeds are slowing down due to heat.


Design of the chips is becoming increasingly expensive.


Manufacturing facilities for chips are becoming prohibitively expensive.


Eventually, transistors will become so small that they are unlikely to operate reliably.


Power usage increases with increased packing of transistors onto a chip.


As the transistors are packed more tightly, dissipating the energy that they use becomes harder and harder.




Week 3 – Components of Computing Devices – Transistors and Semiconductors


Transistors are made by combining semiconductors in various ways. Semiconductors, most commonly created with silicon, are used everywhere in electronics, and are still indispensable to our industry. Semiconductors are made by a process called “doping”, which means adding different types of impurities to silicon. Based on the type of impurity added, the silicon exhibits different degrees of conductivity. Putting this doped silicon together in various ways results in electrical devices that perform various switching activities. I am assigning a good article for this week that explains how all of this happens. I hope you enjoy the reading, and have a great week!




















