
Thermal Analysis Devices Just Got More Affordable



Image: Finney County in southwestern Kansas is now irrigated cropland where once there was short-grass prairie.  NASA IR image with false color.  Photograph credit: NASA/GSFC/METI/Japan Space Systems and the U.S./Japan ASTER Science Team.

Current Uses of Thermal Analysis Devices

One of the benefits of our space program (apart from TANG®) has been the development of infrared (IR) detector technology.  Thermal analysis cameras that can see from the near IR (around 800-1200 nm) to the far IR (8-12 µm), depending on their detector technology, have been part of many public and not-so-public satellite programs for decades, observing everything from crops, to images of your city, to Homeland Security-related activities.

The government has pumped money into IR sensor technology through various agencies, and we all benefit as the results reach the market.  We can't get our hands on the super-secret defense cameras yet, but some remarkable new products are coming to Amazon very soon: thermal analysis cameras will shortly be available for purchase by ordinary consumers.

My Work With IR Cameras

I have worked on IR microscopy and thermal imaging systems and analysis for years in order to see into the workings of semiconductor devices.  The systems I have worked on are complex combinations of high-accuracy motion systems, specialized optics such as Solid Immersion Lens (SIL) technology, and in the case of the most recent system, I architected a full wafer-level prober integrated with the diagnostic tool so that testing could be done at the wafer level.  

Those interested in that system can see a paper I presented at the IEEE Semiconductor Wafer Test Workshop in 2012.

It turns out that silicon is largely transparent to near-IR wavelengths (depending on doping), which opens up some really interesting diagnostic opportunities.  If you could see in the near infrared and looked at the backside of a chip as it operates, you would see what looks like a cityscape at night from space, and depending on the magnification of the optics you could zoom all the way down to a single transistor blinking as it switches.  Such transitions are visible because as a transistor switches it passes briefly through its linear region and emits a few photons of IR energy.

Static bright spots can be heat signatures from power dissipation, such as shorts or heavy current draws.  Blinking spots result from the ON-OFF-ON transitions of flip-flops as each transistor slides briefly through its linear region on its way to a stable state.  With the right magnification optics it is possible to zoom in on individual cells and look for logic faults, stuck-at faults, and crosstalk effects that result from subtle design rule violations.  If a system adds an IR laser, it can stimulate the circuitry so that changes in operating behavior can be observed.  The world of semiconductor failure analysis (FA) owes a lot to these systems.
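As a toy illustration of how those two signatures differ, here is a minimal sketch that separates them using per-pixel temporal statistics over a stack of IR frames.  The function name, thresholds, and data are all hypothetical, not part of any real FA tool:

```python
import numpy as np

def classify_ir_spots(frames, hot_thresh=0.8, blink_thresh=0.2):
    """Separate static hot spots from blinking emitters in an IR frame stack.

    frames: array of shape (n_frames, height, width), normalized to 0..1.
    A static power-dissipation signature is bright in every frame (high
    temporal mean); a switching transistor flickers (high temporal std).
    """
    mean = frames.mean(axis=0)   # bright in every frame -> static heat
    std = frames.std(axis=0)     # varies frame to frame -> switching
    static_hot = mean > hot_thresh
    blinking = std > blink_thresh
    return static_hot, blinking

# Toy example: a 3x3 "sensor", 4 frames.
frames = np.zeros((4, 3, 3))
frames[:, 0, 0] = 1.0       # always bright: a short or heavy current draw
frames[::2, 2, 2] = 1.0     # alternates on/off: a toggling flip-flop
static_hot, blinking = classify_ir_spots(frames)
```

A real system would of course work on calibrated radiometric data with far more sophisticated detection, but the mean-versus-variance distinction is the core idea.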

The heart of all these systems, from diagnosing bad ICs to seeing bad guys at night from space, is the IR camera.  These cameras have always been very expensive (our system's camera costs tens of thousands of dollars), and in order to get decent S/N on the image they typically need to be cooled.  The best such cameras have traditionally used liquid nitrogen to bring the sensor down to around 70 K.  One of the big names in IR sensor camera technology in the U.S. is Raytheon.

IR Imaging Comes to Consumers

According to a recent journal publication from Raytheon, new breeds of IR sensors that do not require cooling are becoming available.  Although Raytheon's sensors have traditionally been produced in very small, very expensive quantities, the company has partnered with Freescale Semiconductor to make these devices in mass quantities.

This means that consumers can now have a useful, low-cost thermal imaging camera system.  Just this week, Seek Thermal, a Santa Barbara-based startup, made a $199 IR camera/sensor accessory for smartphones available for purchase.  Their website illustrates some intriguing applications for the camera in a consumer environment.

Specialized, high-cost IR camera systems will continue to have a place in industry.  When you need to see individual photons and resolve spots down to the sub-micron level, only the most cutting-edge camera will do.  For those of us in the industrial world, though, we can complete the circle by thinking of things to do with a really low-cost IR camera in the factory.  For the price of one so-called industrial camera you could perhaps network 20 or so cheap ones and get better results.  Personally, I have a few ideas that I plan to pursue.  Stay tuned.

For more information on Glew Engineering Consulting, visit the Glew Engineering website or blog, or call 800-877-5892 or 650-641-3019.

Engineering Consulting Firms and Hollywood Share a Common Bond



Most people would never think that an engineering consulting firm would have anything in common with Hollywood.  In reality, they share a common bond: breathing new life into something outdated and making it relevant today.

Godzilla Gets a New Look Thanks to Technology

My wheels began turning on this subject after recently reading in the San Jose Mercury News about a 4K ultra-high-resolution restoration, underway in Japan, of the Godzilla film franchise.  Although the appeal of an incredibly crisp, fully restored version of the 1954 classic is certainly there for a nerdy baby boomer like me, my takeaway from the article was the connection I felt to what the restorers were trying to accomplish.

In the article, the restoration team was quoted as saying that the scanning technology they are using is so good that they are discovering detail and nuance in the original source material that has not been seen since the film was made.  The original transcription and projection technology was simply not up to the challenge.  The amazing depth and contrast resolution that has been lying hidden in the silver halide crystals of the old film stock can only be seen (wires and all!) with today's technology.  In essence, it has taken a new team of skilled technical people, armed with new technology, to reveal the hidden features and thus breathe new life into a very old product.

I see the film restoration process as an excellent analog to the process that an experienced engineering consulting firm can bring to a company's established products.  It is something that I have been doing as an experienced systems electrical engineer for years.

Engineers are Skilled at Revamping Products

So, why use an outside firm to "restore" an old product?  There are many reasons, but here are a few:

  • Fresh Eyes:  The consulting team can experience the product in a fresh way, and like the film restoration team, uncover the hidden detail and design intent of the product.

  • Different Skillsets and Experience Base:  My personal consulting EE experience includes electronic design, robotics, wafer probing, surface metrology, infrared microscopy, cleanroom technology, vacuum transport, and front-end systems.  When you add the other services of my firm to the mix, as an engineering team we are ideally placed to evaluate old implementations and propose new and novel ways to skin the original cat (or dinosaur).

  • No Axe to Grind:  An outside team is not influenced by office politics or the pet projects of an in-house team.  The consulting engineering team can work with in-house resources to get at the original design intent in the same way that the film restoration team uncovers the director's vision.  They can also be objective and provide alternative implementation proposals, often bringing new technology from other fields into play to reduce cost, replace obsolescent designs, add features, and thus breathe new life into the old beast.

I am sure that the restoration team in Japan feels both excited and humbled by their great undertaking.  I share those feelings every time my team takes on a new challenge.  It is why I keep at it after many years in the business, and why I look forward to hearing the "roar" of the finished project as it takes on the world, all over again.


Materials Science News: 2-D Phosphorus, the Future for Solar Cells?



Like most industries, the semiconductor industry is not impervious to economic highs and lows.  After a few rough years the industry is recovering, and along with this recovery has come a wealth of development, and of development dollars.  This week, materials science researchers announced that 2-dimensional phosphorus could be part of the future of the semiconductor industry.  One theory is that 2-dimensional phosphorus could eventually replace the more commonly used silicon.  How would this affect the future of semiconductors?

Silicon in Semiconductors

Silicon atoms (specifically in crystalline form) are able to form complete covalent bonds with each other.  This means that once the bonds are made, the atoms do not gain or lose electrons easily.  When each silicon atom bonds with four neighbors in this way, they form what is called a lattice.  Pure silicon crystal is naturally an insulator and does not allow much electricity to flow through it.  It is possible to change the behavior of silicon by doping it: adding a small amount of impurity into the silicon, which destabilizes the covalent bonds.  There are two different types of doping done to silicon:

  • N-type: (When phosphorus or arsenic are added) creates a good negative conductor

  • P-type: (when boron or gallium are added) creates a good positive conductor

Adding either an N-type or a P-type dopant turns silicon from a good insulator into a good (not great) conductor, creating a semiconductor.  While neither N-type nor P-type doping is novel, putting the two together creates a diode, the simplest semiconductor device.  A diode allows current to flow in one direction but not the other.
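That one-way behavior can be sketched with the ideal (Shockley) diode equation.  The saturation current and ideality factor below are illustrative textbook-scale values, not measurements of any particular device:

```python
import math

def diode_current(v, i_s=1e-12, n=1.0, t=300.0):
    """Ideal (Shockley) diode current for an applied voltage v (volts).

    i_s: saturation current (A), n: ideality factor, t: temperature (K).
    Forward bias gives exponentially growing current; reverse bias
    only leaks a tiny -i_s, which is the one-way valve behavior.
    """
    k = 1.380649e-23      # Boltzmann constant, J/K
    q = 1.602176634e-19   # elementary charge, C
    v_t = k * t / q       # thermal voltage, ~25.9 mV at 300 K
    return i_s * (math.exp(v / (n * v_t)) - 1.0)

forward = diode_current(0.6)    # conducts: milliamp-scale current
reverse = diode_current(-0.6)   # blocks: essentially just -i_s
```

Running it shows a current ratio of many orders of magnitude between the two directions, which is exactly the rectifying action described above.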

New Research

While phosphorus is not in the same group as silicon or carbon (see the periodic table) [1], materials scientists at Rice University have found it to be a promising candidate for "nano-electronic applications" that require stability [2].  To be clear, this is not phosphorus in its common form.  Rather, it is a "two-dimensional phosphorus, [made] through exfoliation from black phosphorus" [2].  Black phosphorus is believed to be the most stable form of phosphorus.  It is created when phosphorus is put under "higher temperatures about 590 °C and higher pressures" or when phosphorus is combined with a "catalyst at ordinary pressures and a temperature of about 200°C" [3].

Researchers at Rice University compared 2-dimensional phosphorus with other 2-dimensional materials, metal dichalcogenides like molybdenum disulfide, because of their inherent conductive properties (metals are natural conductors).  Issues have arisen, however, where these other compounds bond: at the point where the elements meet (a point defect), a disturbance is created in the flow of current.  In doped silicon this doesn't occur, because the negatively and positively doped silicon work together to fill in these gaps, eliminating the disruption in flow.  When there are "multiple point defects or grain boundaries-where the sheets of a 2-D material merge at angles" the device is no longer useful [2].

Advantages of Phosphorus

2-dimensional phosphorus does not exhibit the same issues at point defects that the other materials tested experienced.  According to calculations by theoretical physicist Boris Yakobson and his colleagues at Rice University, where point defects or grain boundaries exist in 2-dimensional phosphorus, the material's semiconducting properties remain stable.  This happens at the point defects because "atoms jut out of the matrix, this complexity gives rise to more variations among defects" [2].  Also, 2-D phosphorus bonds with itself, which eliminates the electron recombination that occurs at hetero-elemental bonds.  2-dimensional phosphorus is very similar to 3-dimensional silicon in that neither has issues with band-gap changes at grain boundaries.  The key difference between the two is that 3-dimensional silicon can change its properties from positive to negative at point defects, and this does not occur in phosphorus.  Another benefit of 2-dimensional phosphorus is that phosphorus exists in abundance on Earth, and black phosphorus is relatively easy to make.  No production-worthy semiconductor equipment is available yet for this material.

Future of Phosphorus Semiconductors

The researchers at Rice University believe that 2-dimensional phosphorus semiconductors could potentially be used to harvest sunlight in solar cells, because the material's band gap matches well with the solar spectrum.  Due to the way this new phosphorus behaves at point defects, the material's performance would not deteriorate as it has with other materials tested [2].  This is great news for the solar industry, which is constantly looking for new ways to make its products more durable and efficient.
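As a back-of-the-envelope check on what "matching the solar spectrum" means, a material's absorption-edge wavelength follows from λ = hc/E.  The helper function below is purely illustrative, and the phosphorene band-gap range mentioned in the comment is an assumed ballpark, not a figure from the Rice paper:

```python
def bandgap_to_wavelength_nm(e_gap_ev):
    """Photon wavelength (nm) at a semiconductor's absorption edge.

    lambda = h*c / E, with h*c ~= 1239.84 eV*nm.  A solar cell can
    absorb photons with wavelengths shorter than this cutoff.
    """
    return 1239.84 / e_gap_ev

# Silicon's 1.12 eV gap puts its cutoff near 1100 nm, in the near IR.
# Few-layer 2-D phosphorus band gaps reported in the literature span
# very roughly 0.3-2 eV (assumed range, for illustration), which would
# sweep the cutoff across much of the usable solar spectrum.
si_cutoff = bandgap_to_wavelength_nm(1.12)
```

The point of the calculation: a tunable band gap near 1-1.5 eV is attractive precisely because its absorption edge lands where sunlight carries the most energy.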

2-dimensional phosphorus has already been tested in "high-performing electronics, and has already shown it can be a better transistor than 2-D metal dichalcogenides" [2]. 

So far the future looks bright for the use of 2-dimensional phosphorus in semiconductors in place of silicon.  Semiconductors and their success affect our lives every day without people even realizing it.  They are in all of our electronic devices, from our smartphones to the computers in our cars, and their effectiveness is what keeps us connected in today's technology-dependent society.  If phosphorus is the answer to fewer interruptions in our devices, then it will be welcomed with open arms, because, let's be honest, nothing is more upsetting than a malfunctioning smartphone.





3D Printing Part 2: A Consulting EE's Views on 3D Printing in Space



I am a consulting electrical engineer (consulting EE) and this is part two in my series on 3D printing.  Today I will be discussing the possibility of 3D printing making its way into space, and why I believe it is possible based on my experience with 3D printing.

3D Printer Headed to Space

3D printing seems to be everywhere these days, from people's living rooms to Office Depot, and now it seems space is the next stop.  NASA is planning to send a 3D printer to the International Space Station (ISS).  The agency has already done some preliminary experiments aboard the "vomit comet" airplane, with enough success to move into the next phase of experiments.  Missions 41/42 and 43/44 will be starting in September 2014 and proceeding into 2015.

3D Printing Process

I believe there is no reason that 3D printing would not work in zero-G, because the process does not depend on gravity.  Raw material (filament) is mechanically pushed into a heated chamber that terminates in a nozzle.  The material is extruded in a thin bead, and the extruder is moved to lay down the pattern on each layer.  Each successive layer melts into the preceding one and thus sticks where it is placed.

The first layer is the tricky part.  It has to adhere to a bed, and this is a universal problem for all 3D printers to solve.  The extruded material has to stick to the bed just enough to hold it in place both during the printing process and while it cools, resisting the tendency of the material to shrink as it cools.  Not enough "stick" and the first layer shrinks unevenly along its long axis and curls away from the bed.  Too much "stick" and the piece cannot be removed from the bed without damage to the piece or the bed.

Lots of experimentation is going on to achieve a reliable, repeatable bed surface.  There are many hobby solutions, and some serious materials science is also happening to find just the right coating for the perfect stick/release surface.

For more information about a Kickstarter-funded group that is making some inroads into solving the problem, see the URL:

Benefits of 3D Printing in Space 

However it is achieved, the first layer is extruded onto a bed surface and adheres temporarily, without bonding.  There is a potential advantage to printing in a zero-G environment: the issue of bridging large gaps with molten filament is no longer a problem.

The traditional issue with gaps is that the extruded material hangs unsupported as the nozzle travels over a gap.  Imagine a rope suspended over a chasm.  Because the hot filament is still in a viscous state when it leaves the extruder nozzle, gravity causes it to droop.  The material solidifies as it cools, but by then the damage is done, and you no longer have straight lines of material over gaps.  In space this problem goes away, at least in theory.  Perhaps it will be replaced by another problem, since the extruded material has some inertia when it leaves the nozzle.  That is something we can learn when the printer gets up there.



3D Printing Part 1: First attempt by a consulting EE

My experience with 3D Printing

I am a consulting electrical engineer (consulting EE), and I want to share my first attempt in the world of 3D printing.  Last May I acquired a SeeMeCNC “Orion” delta-style printer at the Maker Faire in San Mateo, California.  Since then I have used three pounds of plastic filament and printed many terrible failures on the road to some beautiful components.  Figure 1 shows an example of a gear from a gear cube that I designed using SolidWorks™.  The blue part is an early print and is very rough; the red part was printed after I adjusted the process.  Figures 2-6 show the evolution of a vase throughout the 3D printing process.

Figure 1: Gears, from rough (blue) to smooth (red).

Figure 2: A 10-hour print run of a vase. 


 Figure 3: A 10-hour print run of a vase further along in printing.


Figure 4: A 10-hour print run of a vase almost completed.


Figure 5: A 10-hour print run of a vase.


Figure 6: A completed vase.

The control of 3D Printing

Despite the advances made by countless experimenters, hackers, and hard-core engineers in the field, 3D printing is still in its infancy.  As a hobby, it is comparable to the very early days of personal computers (remember the IMSAI 8080?), when useful results could be obtained, but only if you were willing to do a lot of very manual work.  As a business, it is not yet plug-and-play, and I have a sneaking suspicion that companies who offer printed parts for hire make a fair bit of scrap that the end customer never sees.  I look forward to more prototyping with 3D printing.

My electrical engineering career has been tied to the semiconductor equipment industry for many years, so I am no stranger to process control.  In a semiconductor fabrication factory (fab), the ability to diagnose, measure, and control fairly complex processes determines one’s success.  Tiny variations in gas flow rates, annealing temperatures, etch time, and a hundred other factors can be the difference between a wafer full of pricey graphics processing units (GPUs) and one that is the failure analysis (FA) lab’s worst nightmare.

In my attempt to master the 3D printing process I have had to bring my process control and continuous improvement experience to bear and work out a series of experiments to help me “dial in” my printer.


Figure 7: The Deming Circle – Classic Continuous Improvement cycle


3D Printing Process Variables 

This may seem like overkill for a “hobby,” but it is ingrained in my electrical engineering DNA, and I know that careful planning, incremental-change experiments, and careful examination and analysis of the results will yield better and better outcomes.  Good results are all about process control.

There are many process variables that affect the quality of a 3D print.  Like any real-world system, they are interrelated; no single parameter can be changed without a ripple effect on the others.  I have been experimenting carefully with each parameter, a little at a time, printing and re-printing test models in the same fashion I would for a consulting client.  I have designed simple geometric shapes in computer-aided design (CAD) that each stress a particular feature or function of the print.  In future posts I will cover more of them in detail, but for now, here are some of the “high nails” of the process.

Extrusion Temperature

There is no real standard for the purity or content of the plastic filament available today.  As a result, the melt point, glass transition point, and other physical properties of plastic filament vary from batch to batch and color to color; the range can be as much as 20-30 °C!  Too hot, and the plastic will dribble out of the nozzle like a bad head cold; too cool, and the extruder motor will be unable to push the filament through the nozzle fast enough to give a consistent flow.

Flow Rate

3D printers are “dead reckoning” systems.  They depend on stepper motors to drive filament through a heated extruder nozzle, and they have to guess at how much plastic is coming out.  Clever software calculates the expected flow based on filament diameter and commanded filament speed, but there is no feedback in these systems to make adjustments.
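The open-loop estimate the software makes can be sketched in a few lines.  The function name and the example numbers are illustrative, not from any particular slicer:

```python
import math

def expected_flow_mm3_per_s(filament_diameter_mm, feed_rate_mm_per_s):
    """Open-loop volumetric flow estimate, as slicer firmware computes it.

    With no feedback, expected flow is simply the filament's
    cross-sectional area times the commanded feed speed.
    """
    area = math.pi * (filament_diameter_mm / 2.0) ** 2
    return area * feed_rate_mm_per_s

# 1.75 mm filament fed at 2 mm/s -> about 4.8 mm^3/s of plastic,
# assuming the diameter is true and the drive gear never slips.
flow = expected_flow_mm3_per_s(1.75, 2.0)
```

This is also why filament diameter tolerance matters so much: the area term goes as diameter squared, so a few percent of diameter error becomes roughly twice that in flow error.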

Layer Height vs. Extrusion Diameter

In each printed layer, a ribbon of molten plastic is extruded from a nozzle of a given diameter.  Each layer sits on top of the previous layer and is flattened slightly, based on the height of the nozzle above that layer.  Too close, and the layer deforms as it is extruded; too far away, and it may not adhere to the previous layer.  These factors are major contributors to the surface finish and strength of the final part.

I have learned a lot over the past few months and continue to learn from experiment and from collaboration with the vibrant community of owner/experimenters here in the SF Bay Area and Silicon Valley.  The RepRap wiki is an incredible source of information on 3D printing in general.  Presently my success rate is about 70-80%, and so the experiments continue in between prints of artistic or functional pieces.  This is a journey in which my engineering background complements my hacking spirit.  More to come in the following posts in this series on 3D printing.


The Need For Engineering Heroes



Recently, IEEE writer G. Pascal Zachary wrote an article, “Where Are Today’s Engineering Heroes?”, describing the lack of engineering heroes in today’s society.  Celebrating heroes is not only a good way to inspire young people and inform the public; it is also necessary.  The lack of heroes hurts engineering because it diminishes the enterprise in the public eye and constricts the flow of talent into the field.  In a society that hero-worships rock stars and movie stars, serious fields are lacking serious heroes.

Many would argue that there are plenty of engineering heroes in today’s society: Hewlett and Packard, Steve Jobs, Bill Gates.  But those individuals are celebrated mostly for building huge corporations based on technology created and developed by many.  Basically, the engineers who earn the most fame are the ones who make the most money, which would lead others to believe that in order to be a hero you must first amass a fortune.  While Zachary states there is nothing wrong with profiting from your ideas, it shouldn’t be the sole marker for a hero in the industry.

Zachary believes that engineering may be lacking heroes because many people no longer truly understand the work of engineers.  When Edison created the phonograph in 1877, everybody could relate to the invention.  Today, when an engineer designs a microprocessor with 2 billion transistors instead of 1.5 billion, the average person does not understand the significance.  Zachary also believes that engineers face a structural impediment: there is no Nobel Prize for engineering, nor is there an engineering award of similar global status and prestige.  While a few engineers have received the Nobel Prize in other fields, without a Nobel of their own, engineers cannot anoint their heroes the way physicists, economists, or authors can.  Engineering does have the Kyoto Prize in Advanced Technology, the Charles Stark Draper Prize of the U.S. National Academy of Engineering, and the IEEE Medal of Honor, but none of these awards has the same prestige or is as well known as the Nobel Prize.  Zachary also believes these awards underscore the abiding stereotype that engineers are solely male.  Only one of the 34 recipients of the Kyoto Prize in Advanced Technology and one of the 47 recipients of the Draper Prize have been women, and of the 95 people who have received the IEEE Medal of Honor, none has been a woman.

Zachary questions what it takes to become an engineering hero.  He believes that overcoming adversity, whether personal, institutional, or technological, is a valid criterion.  For example, computer scientist Grace Hopper, developer of the first compiler, beat all three: she succeeded in a male-dominated field and institution while shaping the course of computer programming and reaching the rank of rear admiral in the U.S. Navy.  Contribution to the social and cultural well-being of humanity is another criterion for engineering heroism in Zachary’s eyes.  Throughout engineering history, however, people have often sought to solve technological problems simply because they were there, not necessarily because they were considering the greater good.  Many of these inventions nevertheless did result in benefits for humanity.  For example, mechanical engineer Jacob Perkins created the first refrigerator.  While his invention was far from the refrigerators we know today, it is because of his work that countless lives were saved; before the refrigerator, foodborne illness and death were a common headline.  If Jacob Perkins isn’t an engineering hero, then I don’t know who is.

Zachary then continues by tackling the question: Can heroism be taught, or is it innate?  He strongly believes that heroes are made, not born.  They learn from their experiences, react to opportunities and setbacks, and when others stay in the safe zone, they reach into the grey area searching for something more.  By reaching into the grey area, engineering heroes achieve “charismatic authority”, or the ability to influence, inspire, and lead others, a phrase coined by German sociologist Max Weber.  Charismatic authority does not just apply to those who gain outsize status through media acclaim.  Charismatic engineers can also work on an intimate level by influencing their peers behind the scenes or by challenging the norm through their inventions or designs.  “The history of engineering is replete with examples of unheralded engineers who refused to accept designs that compromised the public welfare, no matter how profitable they were,” said historian Matthew Hersch. “Inventions like the safety match and the safety bicycle not only worked better than their predecessors, but more ethically. To me, the creators of these technologies are the real heroes.”

The most accomplished engineers have tried and failed many times in their careers.  While many know who Cerf and Kahn are, most have not heard of Louis Pouzin.  Pouzin, the creator of an early packet-switching network called Cyclades, envisioned the democratizing potential of computer networking.  In 1975, Pouzin and Cerf led a group that attempted to get a packet-switching standard adopted by the International Telegraph and Telephone Consultative Committee.  Pouzin publicly criticized the telecom industry’s conservatism and shortly thereafter saw his funding and career opportunities diminish.  Cerf and Kahn incorporated aspects of Pouzin’s ideas into the TCP/IP design for the Internet.  Decades later, Pouzin is finally receiving some recognition for his contribution.  None of these engineers worked alone, and their accomplishments occurred in parallel with the efforts of others.

While the engineering community values modesty and suspects that promotion conceals distortion or even fraud, Zachary truly believes that heroes and heroism are essential for engineers to gain respect and acknowledgement for their activities and technological developments.


Materials Science Engineering Makes a More Energy-Efficient Fuel Cell



Hydrogen Fuel Cell



While renewable energy sources help to fight the effects of global warming, they have their drawbacks.  Renewable energy cannot be produced as predictably as power from plants burning oil, coal, or natural gas.  Ideally, alternative energy plants would be paired with a huge energy storage system that could store and dispense power.  The Stanford School of Engineering is working to use reversible fuel cells to address this storage issue.  Fuel cells use oxygen and hydrogen to create electricity; if the process were reversed, the fuel cell could also be used to store electricity.
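A rough sketch of the round trip this enables: surplus electricity splits water, and the stored hydrogen is later re-converted to power.  The two efficiency figures below are illustrative assumptions for the sketch, not numbers from the Stanford work:

```python
def roundtrip_energy_kwh(input_kwh, electrolysis_eff=0.7, fuel_cell_eff=0.6):
    """Electricity recovered from a store-as-hydrogen round trip.

    Surplus solar/wind power drives electrolysis; the stored hydrogen
    is later fed back through the fuel cell.  Both steps lose energy,
    and the losses multiply.  Efficiencies here are assumed values;
    real ones depend on cell chemistry and operating temperature.
    """
    return input_kwh * electrolysis_eff * fuel_cell_eff

# 100 kWh of surplus solar -> 42 kWh back when the sun is down.
recovered = roundtrip_energy_kwh(100.0)
```

The multiplication of the two efficiencies is exactly why better understanding of the underlying reactions matters: every point gained at either electrode compounds across the whole storage cycle.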

"You can use the electricity from wind or solar to split water into hydrogen and oxygen in a fuel cell operating in reverse," said William Chueh, an assistant professor of materials science and engineering at Stanford and a member of the Stanford Institute of Materials and Energy Sciences at SLAC National Accelerator Laboratory. "The hydrogen can be stored, and used later in the fuel cell to generate electricity at night or when the wind isn't blowing."

Fuel cells are not a perfect solution.  The chemical reactions that cleave water into hydrogen and oxygen, or join them back together, are not completely understood, at least not to the degree necessary to build utility-grade storage systems.  Chueh is working alongside researchers from SLAC, Lawrence Berkeley National Laboratory, and Sandia National Laboratories to study the chemical reactions in fuel cells in a new way.  In an article published in Nature Communications, Chueh and his team describe how they observed the hydrogen-oxygen reaction in a specific type of high-efficiency solid-oxide fuel cell.  They also took atomic-scale photos of the process using a particle accelerator called a synchrotron.  This analysis is a first of its kind and could help lead to more efficient fuel cells that would eventually allow for utility-scale alternative energy systems.

The Electrons' Role

In a traditional fuel cell, a gas-tight membrane separates the anode and cathode. Oxygen molecules are introduced at the cathode where a catalyst fractures them into negatively charged oxygen ions.  These ions then make their way to the anode where they react with hydrogen molecules to form the cell's primary "waste" product: pure water.  To perform these reactions, electrons also need to make the journey.  Normally, the electrons are drawn to the cathode and the ions are drawn toward the anode, but while the ions pass directly through the membrane, the electrons can't penetrate it; they are forced to circumvent it via a circuit that can be harnessed to run anything from cars to power plants.
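The half-reactions described above combine to the overall reaction H2 + ½O2 → H2O, and the energy released per transferred electron sets the cell's ideal output voltage. A minimal sketch using standard textbook values at room temperature (a real solid-oxide cell runs much hotter and delivers less):

```python
# Ideal (reversible) fuel cell voltage from the Gibbs free energy of
# H2 + 1/2 O2 -> H2O(l).  Standard-state values at 25 C for illustration.

F = 96485.0          # Faraday constant, C/mol of electrons
DG = -237.1e3        # Gibbs free energy of reaction, J/mol (liquid water)
N_ELECTRONS = 2      # electrons transferred per H2 molecule

ideal_voltage = -DG / (N_ELECTRONS * F)
print(f"Ideal cell voltage: {ideal_voltage:.2f} V")
```

This is the familiar 1.23 V of water electrochemistry; losses at the electrodes and membrane, the very losses tied to ion transport at the catalyst, are what pull real cells below it.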

Because electrons do the designated "work" of fuel cells, they are thought of as the critical functioning component. But ion flow is just as important, said Chueh.

"Electrons and ions constitute a two-way traffic pattern in many electrochemical processes," Chueh said.  "Fuel cells require the simultaneous transfer of both electrons and ions at the catalysts, and both the electron and ion 'arrows' are essential."

Electron transfer in electrochemical processes such as corrosion and electroplating is relatively well understood, Chueh said, but ion flow has remained unclear.  This is because the environment where ion transfer may best be studied – the catalysts in the interior of an operating fuel cell – is not conducive to inquiry.

Solid-oxide fuel cells operate at relatively high temperatures.  Certain materials are known to make superior fuel cell catalysts.  Cerium oxide, or ceria, is particularly efficient.  Cerium oxide fuel cells can hum along at 600 degrees Celsius, while fuel cells incorporating other catalysts must run at 800 C or more for optimal efficiency.  Those 200 degrees represent a huge difference, Chueh said.  "High temperatures are required for fast chemical reactivity," he said.  "But, generally speaking, the higher the temperature, the quicker fuel cell components will degrade.  So it's a major achievement if you can bring operating temperatures down."
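Why those 200 degrees are "a huge difference" can be sketched with an Arrhenius rate law, k = A·exp(-Ea/(R·T)). The activation energy below is an assumed, illustrative value, not one from the article, but it shows how sharply reaction rates fall between 800 °C and 600 °C, which is the gap a superior catalyst like ceria must close.

```python
import math

# Arrhenius comparison of reaction rates at two operating temperatures.
# The activation energy is an assumed, illustrative value.

R = 8.314  # gas constant, J/(mol K)

def rate_ratio(ea_j_per_mol, t1_c, t2_c):
    """Ratio of Arrhenius rates k(T1)/k(T2), temperatures in Celsius."""
    t1, t2 = t1_c + 273.15, t2_c + 273.15
    return math.exp(-ea_j_per_mol / (R * t1)) / math.exp(-ea_j_per_mol / (R * t2))

# With Ea = 100 kJ/mol, dropping from 800 C to 600 C slows an
# uncatalyzed reaction more than tenfold.
print(f"k(600 C) / k(800 C) = {rate_ratio(100e3, 600, 800):.3f}")
```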

How Does It Work?

While cerium oxide has established itself as a strong catalyst for fuel cells, it has been unclear why it works so efficiently.  What was needed were visualizations of ions flowing through catalytic materials.  But putting an electron microscope into the pulsing, red-hot heart of a fuel cell running at full bore isn’t exactly possible.  "People have been trying to observe these reactions for years," Chueh said.  "Figuring out an effective approach was very difficult." 

In their Nature Communications paper, Chueh and his colleagues at Berkeley, Sandia and SLAC split water into hydrogen and oxygen (and vice versa) in a cerium oxide fuel cell.  While the fuel cell was running, they applied high-brilliance X-rays produced by Berkeley Lab's Advanced Light Source to illuminate the routes the oxygen ions took in the catalyst.  Access to the ALS tool and the cooperation of the staff enabled the researchers to create "snapshots" revealing just why ceria is such a superior catalytic material: it is, paradoxically, defective.  "In this context, a 'defective' material is one that has a great many defects -- or, more specifically, missing oxygen atoms -- on an atomic scale," Chueh said. "For a fuel cell catalyst, that's highly desirable."

Such oxygen "vacancies," he said, allow for higher reactivity and quicker ion transport, which in turn translate into an accelerated fuel cell reaction rate and higher power. 

"It turns out that a poor catalytic material is one where the atoms are very densely packed, like billiard balls racked for a game of eight ball," Chueh said. "That tight structure inhibits ion flow. But ions are able to exploit the abundant vacancies in ceria. We can now probe these vacancies; we can determine just how and to what degree they contribute to ion transfer. That has huge implications. When we can track what goes on in catalytic materials at the nanoscale, we can make them better -- and, ultimately, make better fuel cells and even batteries."
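Chueh's billiard-ball picture can be made concrete with a deliberately crude sketch, not the paper's physics: treat each attempted hop of an oxygen ion as succeeding only when the target lattice site happens to be vacant, with the vacancy fraction standing in for how "defective" the material is. The hop rate, and hence the ion diffusion rate, then scales with the vacancy fraction.

```python
import random

# Toy picture of vacancy-mediated hopping (illustrative only): an ion's
# attempted hop succeeds only if the target site is vacant, which we take
# to occur with probability equal to the lattice's vacancy fraction.

def successful_hops(vacancy_fraction, attempts=50_000, seed=7):
    """Count successful hops out of a fixed number of attempts."""
    rng = random.Random(seed)
    return sum(rng.random() < vacancy_fraction for _ in range(attempts))

dense = successful_hops(0.01)      # tightly packed "billiard ball" lattice
defective = successful_hops(0.20)  # vacancy-rich, ceria-like lattice
print(f"successful hops of 50,000 attempts: dense {dense}, defective {defective}")
```

The vacancy-rich lattice lets through roughly twenty times as many hops, the toy version of the faster ion transport and higher power the researchers attribute to ceria's defects.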




New Class of Electronic Devices Could Come From 2-D Transistors


Earlier this spring, two separate research groups reported building transistors made solely from two-dimensional (2-D) materials.  Argonne National Laboratory researchers described in the journal Nano Letters a transparent thin-film transistor (TFT) they had created.  They used tungsten diselenide (WSe2) as the semiconducting layer, graphene for the electrodes and hexagonal boron nitride as the insulator.  A week later, the journal ACS Nano reported that researchers from Lawrence Berkeley National Laboratory had also built an all-2-D transistor, in the form of a field-effect transistor (FET).  The Berkeley Lab FET used the same materials as Argonne’s TFT for its electrode and insulator layers, but used molybdenum disulfide (MoS2) as the semiconducting layer.

While the fabrication of transparent TFTs made from 2-D materials could lead to flexible displays with super-high pixel density, an all-2-D FET could have a broader impact.  FETs are nearly omnipresent, being used in computers, mobile devices, and many other electronic devices.

A long-standing issue with FETs prior to Berkeley Lab’s work has been that their charge-carrier mobility degrades because of mismatches between the crystal structures and atomic lattices of the individual components, namely the gate, source and drain electrodes.  These mismatches result in rough surfaces and, in some cases, dangling chemical bonds.  The completely 2-D FET developed at Berkeley Lab eliminates this issue by creating an electronic device in which the interfaces are based on van der Waals interactions – the attractive and repulsive forces between molecules that do not arise from covalent bonds – rather than on covalent bonding.  "In constructing our 2D FETs so that each component is made from layered materials with van der Waals interfaces, we provide a unique device structure in which the thickness of each component is well-defined without any surface roughness, not even at the atomic level," said Ali Javey, a faculty scientist in Berkeley Lab's Materials Sciences Division.  He also said that the approach "represents an important stepping stone towards the realization of a new class of electronic devices."  By basing interfaces on van der Waals interactions instead of covalent bonding, it may be possible to reach a degree of control in material engineering and device exploration that has yet to be seen.


Thin-Film Solar Cells May Be Toxin-Free in the Future


Cadmium chloride is definitely not healthy to be around.  Its cadmium ions are extremely toxic and can cause heart disease, kidney disorders, and many other health problems.  It is ironic that such a toxic substance is essential to the manufacturing of a clean energy technology: thin-film cadmium telluride solar cells.  University of Liverpool researchers, however, have discovered a way to work around this.  They have found that the cadmium chloride can be replaced with magnesium chloride, a safe and inexpensive alternative that could help to decrease the cost and environmental impact of thin-film photovoltaics.  At approximately $0.50 per pound, magnesium chloride is hundreds of times cheaper than cadmium chloride.

This new poison-free process could allow thin-film solar cells to challenge the dominance of silicon photovoltaics, which currently account for approximately 90 percent of the world’s solar market.  Silicon photovoltaics have some major drawbacks.  Silicon does not absorb light particularly well, so modules require layers of very high-purity crystals, each more than 150 micrometers thick.  The price of these silicon slabs is hindering efforts to reduce the price of solar power.  Thin-film solar cells may be a solution.  By using semiconductors that absorb the sun’s rays more efficiently, similar results can be obtained with sheets of lower-purity material that are only 2 micrometers thick.  This results in drastically lower manufacturing costs. 
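The thickness difference follows from the Beer-Lambert law: the fraction of light absorbed in a film is 1 − exp(−α·d), where α is the material's absorption coefficient and d the thickness. The coefficients below are rough order-of-magnitude values for red light, assumed for the sketch rather than taken from the article, but they show why a 2-micrometer CdTe film can do the work of a 150-micrometer silicon wafer.

```python
import math

# Beer-Lambert absorption: fraction absorbed = 1 - exp(-alpha * d).
# Absorption coefficients are rough, assumed order-of-magnitude values.

def absorbed_fraction(alpha_per_cm, thickness_um):
    """Fraction of incident light absorbed in a film of given thickness."""
    return 1.0 - math.exp(-alpha_per_cm * thickness_um * 1e-4)

ALPHA_SI = 1e3     # 1/cm, indirect-gap silicon (weak absorber)
ALPHA_CDTE = 1e5   # 1/cm, direct-gap CdTe (strong absorber)

print(f"2 um film:   Si {absorbed_fraction(ALPHA_SI, 2):.0%}, "
      f"CdTe {absorbed_fraction(ALPHA_CDTE, 2):.0%}")
print(f"150 um film: Si {absorbed_fraction(ALPHA_SI, 150):.0%}")
```

At 2 micrometers, silicon absorbs only a small fraction of the light while CdTe absorbs essentially all of it; silicon needs its full 150-micrometer thickness to catch up.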

The leading thin-film technology, a sandwich of cadmium telluride and cadmium sulfide (CdTe/CdS), makes up between 5 and 7 percent of the solar power market.  While the technology is nothing new, CdTe cells have been slow to take off.  However, their lab efficiency has risen above 20 percent in the last few years, now trailing silicon by only approximately 5 percent. 

“Now that the efficiency has improved, CdTe can compete commercially with silicon,” says Jonathan Major, a photovoltaics researcher at the University of Liverpool who developed the new magnesium chloride process.  When light hits the boundary region between CdTe and CdS in the cells, it excites electrons that are drawn into the CdS layer (an n-type semiconductor).  As the holes left behind by those electrons fall into the CdTe (p-type) layer, the separation of charge generates a current.  The two layers must be treated with a solution of cadmium chloride or an equivalent to make them function efficiently.  “This process is used by all the [manufacturing] plants,” says Major, and it requires specialized industrial waste processing facilities to handle the material.  The treatment has several effects, one being that the material’s chloride ions help to make a better junction between the two semiconductor layers.  Also, Chen Li at Oak Ridge National Laboratory in Tennessee found that chloride replaces some tellurium in the CdTe layer.  “That protects electrons and holes from unwanted recombination,” says Li, which allows current to flow more efficiently. 

Major’s team tested several chloride salts as replacements for cadmium chloride, and found that a vapor treatment of magnesium chloride achieved the best results.  Their cells achieved efficiency levels of 13.5 percent, similar to control cells made using the conventional process, and matched the controls on other factors such as voltage, current density, and stability.  Other design improvements, such as thinning the CdS layer, increased cell efficiency to 15.7 percent.  While fume hoods and gas masks are required during the cadmium chloride process, magnesium chloride can be deposited using an airbrush.  

Major has already been in touch with the leading manufacturer of CdTe solar cells: First Solar, located in Tempe, Arizona.  First Solar manufactured the world’s largest solar photovoltaic power facility, Arizona’s Agua Caliente Solar Project, which has an installed capacity of 290 megawatts.

“The cadmium chloride treatment is to date a critical part of the CdTe solar cell manufacturing sequence,” says Raffi Garabedian, chief technology officer at First Solar.  “We apply a full and robust set of environmental, health, and safety controls in order to guarantee that we have no adverse impacts as a result of our manufacturing operation.”  Garabedian adds that, "Despite the cost of these controls, the cadmium chloride treatment step is not a major cost driver in our manufacturing process.”  That, however, is not what Major was told.  “Talking to them privately," says Major, "they said that cadmium chloride was the second biggest expense in their process.”

Regardless of the cost implications, replacing toxic cadmium chloride is clearly a sensible move, and we may see more magnesium chloride used in the future.


Integrated Circuit Design Changes Could Bring Back Vacuum Electronics


By the mid-1970s, the only vacuum tubes you could find in Western electronics were in certain kinds of specialized equipment.  Today, vacuum tubes are pretty much a nonexistent technology, but that may change.  Some changes to the fabrication techniques used in integrated circuit design could bring vacuum electronics back. 

NASA Ames Research Center has been working to develop vacuum-channel transistors.  While the research is still in its early stages, the prototypes hold great promise.  Vacuum-channel transistors have the potential to work 10 times as fast as ordinary silicon transistors and may be able to operate at terahertz frequencies.  They are also much more tolerant of heat and radiation.  To understand why these developments may be possible, it helps to understand a little about the construction and functionality of vacuum tubes.  While the vacuum tubes that amplified signals in radios and televisions during the first half of the 20th century may not seem to resemble the metal-oxide-semiconductor field-effect transistors (MOSFETs) used in modern electronics, they do have similarities.  Both are three-terminal devices.  The voltage applied to one terminal – the grid for the vacuum tube and the gate for the MOSFET – controls the amount of current flowing between the other two (from cathode to anode in a vacuum tube, and from source to drain in a MOSFET).  This allows both devices to function as an amplifier or, in some cases, a switch.  How the electric current flows, however, is very different in the two devices.  Vacuum tubes rely on a process called thermionic emission, in which heating the cathode causes it to shed electrons into the surrounding vacuum.  The current in transistors, by contrast, comes from the drift and diffusion of electrons between the source and the drain through the solid semiconducting material that separates them.
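Thermionic emission is well described by the Richardson-Dushman law, J = A·T²·exp(−W/(k·T)), which shows why tube cathodes must glow: emission is negligible until the cathode is very hot. The sketch below uses textbook constants and a work function typical of tungsten; the temperatures are illustrative.

```python
import math

# Richardson-Dushman thermionic emission: J = A * T^2 * exp(-W / (k*T)).
# Work function of ~4.5 eV is typical of tungsten; values illustrative.

A_RICHARDSON = 1.2e6   # A/(m^2 K^2), theoretical Richardson constant
K_B = 8.617e-5         # Boltzmann constant, eV/K

def thermionic_current_density(temp_k, work_function_ev=4.5):
    """Emitted current density in A/m^2 at a given cathode temperature."""
    return A_RICHARDSON * temp_k**2 * math.exp(-work_function_ev / (K_B * temp_k))

cold = thermionic_current_density(300.0)   # room temperature: negligible
hot = thermionic_current_density(2500.0)   # glowing tungsten cathode
print(f"J at 300 K: {cold:.1e} A/m^2;  J at 2500 K: {hot:.1e} A/m^2")
```

The exponential factor spans dozens of orders of magnitude between room temperature and a glowing cathode, which is exactly the heating cost the vacuum-channel transistor is designed to avoid.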

Solid-state electronics surpassed vacuum tubes due to their lower costs, smaller size, longer lifetimes, efficiency, ruggedness, reliability, and consistency.  However, considered solely as a medium for transporting charge, vacuum beats semiconductors.  Electrons move freely through a vacuum, whereas in a solid they collide with the atoms of the crystal, a process called crystal-lattice scattering.  Vacuum is also not susceptible to the kind of radiation damage that semiconductors are, and it produces less noise and distortion than solid-state materials.  When only a few vacuum tubes were needed to operate a radio or television, their drawbacks were not that significant.  However, as circuits became more complicated, it became obvious something needed to change.  For example, the 1946 ENIAC computer used 17,468 vacuum tubes, weighed 27 metric tons, and took up almost 200 square meters of floor space.  The transistor revolution ended these issues.  The great change in electronics occurred not so much because of the intrinsic advantages of semiconductors but because engineers gained the ability to mass-produce and combine transistors in integrated circuits by etching a silicon wafer with the appropriate pattern.  As the technology progressed, more transistors could be put on a microchip, allowing circuit designs to become more complicated from one generation to the next. 

After more than 40 years of miniaturization, the oxide layer that insulates the gate electrode of a typical MOSFET is only a few nanometers thick, and only a few tens of nanometers separate its source and drain.  While transistors can't get much smaller, the quest for faster and more energy-efficient chips moves forward.  One possible candidate to replace the traditional transistor is the vacuum-channel transistor, which combines the best aspects of vacuum tubes and transistors and can be made just as small and inexpensively as any solid-state device.  In a vacuum tube, an electric filament heats the cathode so that it emits electrons.  Vacuum-channel transistors do not require a filament or a hot cathode: if the device is small enough, the electric field across it is sufficient to draw electrons from the source by a process called field emission.  Removing the inefficient heating element reduces the area each device takes up and makes the new transistor more energy efficient.  Current flow in a vacuum-channel transistor would be controlled just as in a traditional MOSFET, using a gate electrode separated from the current channel by an insulating dielectric material such as silicon dioxide.  The dielectric insulator transfers the electric field where it is needed while preventing current from flowing into the gate.   
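Field emission explains why shrinking the device is what makes the hot cathode unnecessary. Emission follows the Fowler-Nordheim law, J ≈ (a·F²/φ)·exp(−b·φ^1.5/F), which is extremely nonlinear in the local field F, and F grows as the gap shrinks at fixed voltage. The constants and work function below are approximate textbook values, used only to show the scaling.

```python
import math

# Fowler-Nordheim field emission: J ~ (a * F^2 / phi) * exp(-b * phi^1.5 / F),
# with F in V/m and phi in eV.  Constants are approximate textbook values.

A_FN = 1.54e-6   # A eV / V^2 (first Fowler-Nordheim constant, approx.)
B_FN = 6.83e9    # V / (m eV^1.5) (second Fowler-Nordheim constant, approx.)

def fn_current_density(field_v_per_m, phi_ev=4.5):
    """Approximate field-emission current density, A/m^2."""
    return (A_FN * field_v_per_m**2 / phi_ev
            * math.exp(-B_FN * phi_ev**1.5 / field_v_per_m))

j_low = fn_current_density(2e9)    # 2 V/nm across the channel
j_high = fn_current_density(5e9)   # 5 V/nm: a modestly smaller gap
print(f"2.5x stronger field -> {j_high / j_low:.1e}x more emission")
```

A modest increase in field strength raises emission by many orders of magnitude, so at nanometer gaps an ordinary operating voltage pulls electrons out of a cold source.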

While the work being done with vacuum-channel transistors is in the early stages, developments could have a major impact on devices where speed is critical.  The first effort to create a prototype produced a device that could operate at 460 gigahertz, approximately 10 times faster than the best silicon devices.  This offers great promise for the vacuum-channel transistors to operate in the terahertz gap.


Linear v Novellus (Semiconductor Equipment)


After 8 long years, Novellus has finally rid itself of its lawsuit with Linear Technology.  Irell & Manella LLP, for whom Glew Engineering has worked in the past, took no prisoners in the unanimous jury verdict announced yesterday in favor of their client Novellus.  The jury consisted of 12 men and women in Santa Clara, CA, in the heart of Silicon Valley.  This is certainly good news for Novellus' legal team, as well as its bottom line.  Congratulations to Jonathan Kagan, Esq. and his colleagues.  Now both sides can get back to what they do best: making chips and chip equipment.

Novellus also shipped its 1000th Vector PECVD tool in February.  Considering the tool's throughput and uptime, there may be as many chips out there by now with Novellus' dielectric films as with those of any other semiconductor equipment manufacturer.  See the details at:


Semiconductor Equipment, Glew Engineering

