Fred is a writer, lecturer and computer scientist who lives in Ubon Ratchathani.
The CPU (Central Processing Unit) is the real ‘brain’ in every computer. CPUs are manufactured by a process called photolithography, which is actually based on two very old technologies.
The first old technology CPU manufacturers borrowed was photography. Introduced in 1839, the daguerreotype was the first practical photographic process, producing an image on a mirror-like, silver-coated sheet of copper.
A computer chip lithograph is simply a type of photograph that is printed on a silicon wafer.
One definition of lithography is: “a method of printing from a flat surface (such as a smooth stone or a metal plate) that has been prepared so that the ink will only stick to the design that will be printed” (from Merriam Webster online at merriam-webster.com).
Now, substitute the word “copper” for “ink” and you start to get the idea. The lines that are ‘printed’ onto the very pure, industrial-grade silicon are made of copper, and they conduct electricity.
The computer chips are multiple layers of lithographs, all developed onto a single silicon wafer.
Unlike conventional photography, in which an image is exposed once onto a negative, photolithography allows dozens of exposures on a single wafer, with one manufacturer claiming 40 exposures for a quad-core processor.
For each layer, a large physical model called a mask is created. The term ‘large’ here is relative: a single mask is only about the size of a satang coin. The layers of stacked conductive material form a lattice that looks like a multi-tiered Bangkok highway plan (one that actually works).
This brings us to the second old technology used in photolithography: the glorious and humble telescope, whose inventor is…unknown.
We do know roughly when it was developed: around the turn of the seventeenth century, when lens manufacturing had advanced enough for some curious eyeglass maker to hold up two lenses and see the result.
Historians give a Dutchman, Hans Lippershey, the credit for inventing the telescope because he was the first one to apply for a patent.
Think of poor Nikola Tesla, given credit for inventing the radio long after he was dead.
Weird things happen when History, Science, and Law get into a three-way collision, and the aftermath might not resemble the truth at all.
In photolithography the principle is used in reverse. Instead of using lenses to make an image larger, as a telescope or telephoto camera lens does, light is shone through the mask and then through a lens (or a series of lenses) to project a reduced image, just like looking through the wrong end of a telescope.
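The arithmetic of that reduction is simple enough to sketch. A 4× reduction factor is typical of modern stepper optics, but the feature size below is an invented number, purely for illustration:

```python
# Illustrative sketch of reduction-lithography arithmetic.
# The 4x reduction factor is typical of modern stepper optics;
# the mask feature size below is an invented example value.

def printed_size(mask_feature_nm, reduction=4.0):
    """Size of a mask feature after projection onto the wafer."""
    return mask_feature_nm / reduction

# A 360 nm line drawn on the mask prints as a 90 nm line on the wafer.
print(printed_size(360))  # 90.0
```

In other words, engineers draw the mask at a comfortable, workable scale and let the optics shrink it.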
It’s the ultimate reverse engineering.
This inverse use of old technology is the entire principle behind much of electronic miniaturization. The specific and categorical explanation of silicon photolithography is devilishly complex.
Yes, the light is usually UV laser light; the lens is sometimes not even glass (extreme-ultraviolet systems focus the light with mirrors, and electron-beam systems steer their beams with magnetic fields); and there are mind-numbing statements like ‘the preferred method of applying the adhesion promoter is by subjecting the substrate to HMDS vapor’. But a spade is still a spade.
It’s an intricate picture, undoubtedly the most intricate picture in human history, but it’s still a picture, an image.
Photolithography as a manufacturing process for CPUs has real physical limitations. In many chips the circuitry is now so fine that the electrons are practically queuing single file through it.
Unless someone figures out a way to saw an electron in half, most analysts agree we are going to see much slower advances in CPU performance than the whiplash rate of progress of the last twenty years.
That is why we are hearing mumbles about ‘quantum’ processors and ‘virtual’ CPUs. Intel and AMD, even if they combined their assets, still could only be window shopping for these two theoretical products — neither of which can be made commercially available at this point in time, or at any time in the foreseeable future.
The future isn’t what it used to be.
And we will, of course, be watching a work-in-progress as CPU chips advance, and innovative human minds have a wonderfully creative habit of surprising everyone, including themselves.
Maybe one day we will find a cure for both the Microsoft Windows Blue Screen of Death and politicians.
I am not holding my breath, but I am keeping my eyes and mind as open as I can.
In conclusion, here is a funny fact most people don’t know about what happens after the CPUs are made: they are tested for speed. No, that’s not the funny part.
The funny part is next.
When you see two CPUs from one company in a single class, like the sparkly Intel i7-4_ _ _ _ series, where one CPU has a higher clock speed of 4 GHz and the other is rated at a lower 3.4 GHz, the lower-clocked chip failed the quality-assurance test and was demoted and reduced in price. I said it was funny, not comical.
The manufacturer sets ‘jumpers’ on the chip to reduce its operating speed to match the other failed chips in the same class, which incidentally raises its operating temperature (not a good thing). Still, it’s better than recycling.
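The sorting rule behind this ‘binning’ can be sketched in a few lines. To be clear, this is a hypothetical illustration, not any manufacturer’s actual test procedure, and the rated speeds and test results are invented numbers:

```python
# Hypothetical sketch of CPU speed binning: each chip is tested,
# and a chip that fails at the top rated speed is demoted to a
# lower-rated (and lower-priced) bin. All numbers are invented.

BINS_GHZ = [4.0, 3.4]  # rated speeds within one class, fastest first

def assign_bin(max_stable_ghz):
    """Return the highest rated speed the chip passed, or None (reject)."""
    for rated in BINS_GHZ:
        if max_stable_ghz >= rated:
            return rated
    return None  # failed even the lowest bin

# A chip stable only up to 3.7 GHz fails the 4.0 GHz test
# and is sold as a 3.4 GHz part.
print(assign_bin(3.7))  # 3.4
print(assign_bin(4.2))  # 4.0
print(assign_bin(3.0))  # None
```

So the 3.4 GHz part and the 4 GHz part may well have come off the same production line; one simply tested faster than the other.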
A geek friend and I manipulated the jumpers on several CPUs from the same class (and from all three top manufacturers, by the way), and in every case the lower-rated CPUs both increased their measured clock speed and ran at a lower temperature on the probe. It’s called overclocking. Please do not try this at home; proceed cautiously at your own risk. It’s just a funny fact no one laughs about.
It is very curious to me that this devaluation is almost never discussed in the trade papers, but it’s no mystery really: lots of advertising dollars are floating around, and PC component manufacturers all advertise in…PC magazines.
I can imagine someone in the marketing department of a major CPU manufacturer reading this and coughing up his venti iced skinny hazelnut macchiato, sugar-free syrup, extra shot, light ice, no whip…coffee.