GLEW'S NEWS BLOG


Materials Science News: 2-D Phosphorus, the Future for Solar Cells?

 


Like most industries, the semiconductor industry is not impervious to economic highs and lows.  After a few rough years the industry is recovering, and along with that recovery has come a wave of development and development dollars.  This week materials science researchers announced that 2-dimensional phosphorus could be part of the future of the semiconductor industry.  One theory is that 2-dimensional phosphorus could eventually replace the more commonly used silicon; how would this affect the future of semiconductors?

Silicon in Semiconductors

Silicon atoms (specifically in crystalline form) are able to form strong covalent bonds with each other.  Once a bond is made, the atoms do not gain or lose electrons easily.  Each silicon atom bonds to four neighbors, forming what is called a lattice.  Pure silicon crystals are naturally insulators and do not allow much electricity to flow through them.  It is possible to change the behavior of silicon by doping it.  Doping means adding a small amount of impurity to the silicon, which destabilizes the covalent bonds.  Two different types of doping are done to silicon:

  • N-type: created when phosphorus or arsenic is added; makes silicon a good negative (electron) conductor

  • P-type: created when boron or gallium is added; makes silicon a good positive (hole) conductor

Adding either an N-type or a P-type dopant turns silicon from a good insulator into a decent (not great) conductor, which is what makes it a semiconductor.  Neither N-type nor P-type doping is novel on its own, but putting the two together creates a diode, the simplest semiconductor device.  A diode allows current to flow in one direction but not the other.
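As a rough illustration of that one-way behavior, the ideal (Shockley) diode equation can be evaluated numerically.  This is a minimal sketch; the saturation current and ideality factor below are placeholder values, not measurements of any particular device.

```python
import math

def diode_current(v_volts, i_sat=1e-12, n=1.0, temp_k=300.0):
    """Ideal (Shockley) diode equation: I = I_s * (exp(V / (n*V_T)) - 1).

    i_sat and n are illustrative placeholders, not data for a real diode.
    """
    k_b = 1.380649e-23   # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C
    v_t = k_b * temp_k / q  # thermal voltage, ~0.026 V at 300 K
    return i_sat * (math.exp(v_volts / (n * v_t)) - 1.0)

# Forward bias passes current; reverse bias blocks almost all of it.
print(diode_current(0.6))   # on the order of 10 mA for these placeholder values
print(diode_current(-0.6))  # about -1e-12 A, essentially zero
```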

New Research

While phosphorus is not in the same group of the periodic table as silicon or carbon [1], materials scientists at Rice University have found it to be a promising candidate for "nano-electronic applications" that require stability [2].  To be clear, this is not ordinary bulk phosphorus.  Rather, it is "two-dimensional phosphorus, [made] through exfoliation from black phosphorus" [2].  Black phosphorus is believed to be the most stable form of phosphorus.  It is created when phosphorus is held at "higher temperatures about 590 °C and higher pressures," or when phosphorus is combined with a "catalyst at ordinary pressures and a temperature of about 200 °C" [3].

Researchers at Rice University compared 2-dimensional phosphorus with other 2-dimensional materials, such as the metal dichalcogenide molybdenum disulfide, because of their inherent conductive properties.  Issues have arisen, however, at the places where these other compounds' elements meet (point defects): a disturbance is created in the flow of current.  In doped silicon this doesn't occur, because the negatively and positively doped silicon work together to fill in these gaps, eliminating the disruption in flow.  When there are "multiple point defects or grain boundaries, where the sheets of a 2-D material merge at angles," the device is no longer useful [2].

Advantages of Phosphorus

2-dimensional phosphorus does not exhibit the same issues at point defects that the other materials tested experienced.  According to calculations by theoretical physicist Boris Yakobson and his colleagues at Rice University, where point defects or grain boundaries exist in 2-dimensional phosphorus, the material's semiconducting properties remain stable.  This happens because at the point defects "atoms jut out of the matrix; this complexity gives rise to more variations among defects" [2].  Also, 2-D phosphorus bonds with itself, which eliminates the recombination of electrons that occurs at hetero-elemental bonds.  2-dimensional phosphorus is similar to 3-dimensional silicon in that neither has issues with band-gap changes at grain boundaries.  The key difference between the two is that 3-dimensional silicon can change its properties from positive to negative at point defects, and this does not occur in phosphorus.  Another benefit is that phosphorus exists in abundance on Earth, and black phosphorus is relatively easy to make.  No production-worthy semiconductor equipment is available yet for this material, however.

Future of Phosphorus Semiconductors

The researchers at Rice University believe that 2-dimensional phosphorus semiconductors could potentially be used to harvest sunlight in solar cells, because the material's band gap matches well with the solar spectrum.  Because of the way this new phosphorus behaves at point defects, the material's performance would not deteriorate as it has with other materials tested [2].  This is great news for a solar industry that is constantly looking for new ways to make its products more durable and efficient.
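To see why a band gap that "matches the solar spectrum" matters, a band-gap energy can be converted to the longest photon wavelength the material can absorb.  This is a sketch for illustration only; the band-gap values below are approximate textbook figures, and the gap of few-layer phosphorus in particular depends strongly on the number of layers.

```python
PLANCK_EV_S = 4.135667696e-15  # Planck constant in eV*s
C_M_PER_S = 2.99792458e8       # speed of light in m/s

def cutoff_wavelength_nm(band_gap_ev):
    """Longest absorbable wavelength (nm) for a band gap (eV): lambda = h*c / E_g."""
    return PLANCK_EV_S * C_M_PER_S / band_gap_ev * 1e9

# Approximate, illustrative band gaps (eV) only; real values vary with material form.
for name, eg in [("silicon", 1.1), ("CdTe", 1.5), ("few-layer black phosphorus", 1.0)]:
    print(f"{name}: Eg ~ {eg} eV -> absorbs light below ~{cutoff_wavelength_nm(eg):.0f} nm")
```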

2-dimensional phosphorus has already been tested in "high-performing electronics, and has already shown it can be a better transistor than 2-D metal dichalcogenides" [2]. 

So far the future looks bright for the use of 2-dimensional phosphorus in semiconductors instead of silicon.  Semiconductors and their success affect our lives every day without most people even realizing it.  Semiconductors are in all of our electronic devices, from our smartphones to the computers in our cars.  Their effectiveness is what keeps us connected in today's technology-dependent society.  If phosphorus is the answer to fewer interruptions in our devices, then it will be welcomed with open arms because, let's be honest, nothing is more upsetting than a malfunctioning smartphone.

[1] http://www.mpoweruk.com/images/periodic_table.gif

[2] http://www.rdmag.com/news/2014/09/phosphorus-promising-semiconductor

[3] http://www.britannica.com/EBchecked/topic/68159/black-phosphorus

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

3D Printing Part 2: A Consulting EE's views on 3D Printing in Space

 


I am a consulting electrical engineer (consulting EE), and this is part two in my series on 3D printing.  Today I will discuss the possibility of 3D printing making its way into space, and why I believe it is feasible based on my experience with 3D printing.

3D Printer Headed to Space

3D printing seems to be everywhere these days, from people's living rooms to Office Depot, and now space appears to be the next stop.  NASA is planning to send a 3D printer to the International Space Station (ISS).  The agency has already done preliminary experiments aboard the "vomit comet" airplane, which were successful enough to move the project into the next phase.  Missions 41/42 and 43/44 will be starting in September 2014 and proceeding into 2015.

3D Printing Process

I believe there is no reason that 3D printing would not work in zero-G.  The process does not depend on gravity.  Raw material (filament) is mechanically pushed into a heated chamber that terminates in a nozzle.  The material is extruded in a thin bead, and the extruder is moved to lay down the pattern for each layer.  Each successive layer melts into the preceding one and thus sticks where it is placed.

The first layer is the tricky part.  It has to adhere to a bed and this is a universal problem for all 3D printers to solve.  The extruded material has to stick to the bed just enough to hold it in place both during the printing process and also while it cools.  The adhesion has to resist the tendency of the material to shrink as it cools.  Not enough "stick" and the first layer shrinks unevenly on its long axis and curls away from the bed.  Too much "stick" and the unfinished piece cannot be removed from the bed without damage to the piece or bed.

Lots of experimentation is going on to try and achieve a reliable, repeatable bed surface.  There are many hobby solutions and some serious materials science is also happening to find just the right coating for the perfect stick/release surface. 

For more information about a Kickstarter-funded group that is making some inroads into solving the problem, see:

http://www.geckotek3d.com/

Benefits of 3D Printing in Space 

However it is achieved, the first layer is extruded onto a bed surface and adheres temporarily, without bonding.  There is a potential advantage to printing in a zero-G environment: the issue of bridging large gaps with molten filament is no longer a problem.

The traditional issue with gaps is that the extruded material hangs unsupported as the nozzle travels over a gap.  Imagine a rope suspended over a chasm: gravity causes it to droop.  The hot filament droops in the same way, because it is still viscous when it leaves the extruder nozzle.  It solidifies as it cools, but by then the damage is done and you no longer have straight lines of material over gaps.  In space, this problem goes away, at least in theory.  Perhaps it will be replaced by another problem, since the extruded material still has some inertia when it leaves the nozzle.  That is something we can learn when the printer gets up there.

http://www.nasa.gov/mission_pages/station/research/experiments/1115.html#results

 

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

3D Printing Part 1: First attempt by a consulting EE

 
My experience with 3D Printing

I am a consulting electrical engineer (consulting EE), and I want to share my first attempt in the world of 3D printing.  Last May I acquired a SeeMeCNC “Orion” delta-style printer at the Maker Faire in San Mateo, California.  Since then I have used three pounds of plastic filament and printed many terrible failures on the road to some beautiful components.  Figure 1 shows an example of a gear from a gear cube that I designed using SolidWorks (TM).  The blue part is an early print and is very rough.  The red part was printed after I adjusted the process.  Figures 2 through 6 show the evolution of a vase throughout the 3D printing process.

Figure 1: Gears, from rough (blue) to smooth (red)

Figure 2: A 10-hour print run of a vase, in its beginning stages

Figure 3: The vase print further along

Figure 4: The vase print almost completed

Figure 5: The 10-hour print run of the vase

Figure 6: The completed vase

The Control of 3D Printing

Despite the advances made by countless experimenters, hackers, and hard-core engineers in the field, 3D printing is still in its infancy.  As a hobby, it is comparable to the very early days of personal computers (remember the IMSAI 8080?), in which useful results could be obtained, but only if you were willing to do a lot of very manual work.  As a business, it is not yet plug-and-play, and I have a sneaking suspicion that companies who offer printed parts for hire make a fair bit of scrap that the end customer never sees.  I look forward to more prototyping with 3D printing.

My electrical engineering career has been tied to the semiconductor equipment industry for many years, so I am no stranger to process control.  In a semiconductor fabrication facility (fab), the ability to diagnose, measure, and control fairly complex processes determines one’s success.  Tiny variations in gas flow rates, annealing temperatures, etch times, and a hundred other factors can be the difference between a wafer full of pricey graphics processing units (GPUs) and one that is the failure analysis (FA) lab’s worst nightmare.

In my attempt to master the 3D printing process I have had to bring my process control and continuous improvement experience to bear and work out a series of experiments to help me “dial in” my printer.

Figure 7: The Deming Circle, the classic continuous improvement cycle

 

3D Printing Process Variables 

This may seem like a bit of overkill for a “hobby,” but it is ingrained in my electrical engineering DNA, and I know that careful planning, incremental-change experiments, and careful examination and analysis of the results will yield better and better outcomes.  Good results are all about process control.
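A minimal sketch of how such incremental experiments might be organized: a small full-factorial test matrix over a few process variables.  The parameter levels below are made-up placeholders, not recommended settings for any particular printer or filament.

```python
from itertools import product

# Hypothetical screening levels; placeholders only, not tuned values.
extrusion_temp_c = [195, 205, 215]
layer_height_mm = [0.2, 0.3]
flow_multiplier = [0.95, 1.00, 1.05]

runs = list(product(extrusion_temp_c, layer_height_mm, flow_multiplier))
for i, (temp, layer, flow) in enumerate(runs, start=1):
    print(f"run {i:02d}: temp={temp} C, layer={layer} mm, flow={flow}")
print(f"{len(runs)} test prints to cover the full factorial")
```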

There are many process variables that affect the quality of a 3D print.  Like any real-world system, they are interrelated; no single parameter can be changed without having a ripple effect on the others.  I have been experimenting carefully with each parameter, a little at a time, printing and re-printing test models in the same fashion I would for a consulting client.  I have designed simple geometric shapes in computer-aided design (CAD) that stress a particular feature or function of the print.  In future posts I will cover more of them in detail, but for now, here are some of the “high nails” of the process.

Extrusion Temperature

There is no real standard for the purity or content of the plastic filament available today.  As a result, the melt point, glass transition point, and other physical properties of plastic filament vary from batch to batch and color to color; the range can be as much as 20 to 30 °C!  Too hot, and the plastic will dribble out of the nozzle like a bad head cold; too cool, and the extruder motor will be unable to push the filament through the nozzle fast enough to give consistent flow.

Flow Rate

3D printers are “dead reckoning” systems.  They depend on stepper motors to drive filament through a heated extruder nozzle and have to guess at how much plastic is coming out.  Clever software calculates the expected flow based on the filament diameter and the commanded filament speed, but there is no feedback in these systems to make adjustments.
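Here is a sketch of the dead-reckoning arithmetic involved, under the usual simplifying assumption that the filament is a perfect cylinder of its nominal diameter; the numbers are illustrative, not calibration values.

```python
import math

def volumetric_flow_mm3_per_s(filament_diameter_mm, filament_feed_mm_per_s):
    """Expected melt flow: filament cross-section area times commanded feed rate.

    There is no feedback; the firmware simply trusts the nominal diameter.
    """
    area = math.pi * (filament_diameter_mm / 2.0) ** 2
    return area * filament_feed_mm_per_s

# 1.75 mm filament fed at 2 mm/s -> roughly 4.8 mm^3/s of extruded plastic.
print(volumetric_flow_mm3_per_s(1.75, 2.0))
```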

Layer Height vs. Extrusion Diameter

In each printed layer, a ribbon of molten plastic is extruded from a nozzle of a given diameter.  Each layer sits on top of the previous layer and is flattened slightly, depending on the height of the nozzle above that layer.  Too close, and the layer deforms as it is extruded; too far away, and it may not adhere to the previous layer.  These factors are major contributors to the surface finish and strength of the final part.
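One common way slicers model that flattened ribbon (an approximation, not a universal standard) is as a rectangle with semicircular ends, which ties extrusion width, layer height, and the required filament feed together.  The example values below are hypothetical.

```python
import math

def bead_cross_section_mm2(extrusion_width_mm, layer_height_mm):
    """Cross-section of one extruded bead, modeled as a rectangle with
    semicircular ends; a common slicer approximation, not an exact law."""
    rect = (extrusion_width_mm - layer_height_mm) * layer_height_mm
    caps = math.pi * (layer_height_mm / 2.0) ** 2
    return rect + caps

def filament_feed_per_mm_of_travel(width_mm, height_mm, filament_diameter_mm=1.75):
    """How many mm of filament must be fed per mm of nozzle travel."""
    filament_area = math.pi * (filament_diameter_mm / 2.0) ** 2
    return bead_cross_section_mm2(width_mm, height_mm) / filament_area

# Example: a 0.48 mm wide bead at 0.2 mm layer height with 1.75 mm filament.
print(filament_feed_per_mm_of_travel(0.48, 0.2))  # ~0.036 mm filament per mm of travel
```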

I have learned a lot over the past few months and continue to learn from experiments and from collaboration with the vibrant community of owner/experimenters here in the SF Bay Area and Silicon Valley.  The RepRap wiki is an incredible source of information on 3D printing in general.  Presently, my success rate is about 70 to 80 percent, so the experiments continue in between prints of artistic or functional pieces.  This is a journey in which my engineering background complements my hacking spirit.  More to come in the following posts in this series on 3D printing.

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

The Need For Engineering Heroes

 


IEEE writer G. Pascal Zachary recently wrote an article, “Where Are Today’s Engineering Heroes?”, describing the lack of engineering heroes in today’s society.  Celebrating heroes is not only a good way to inspire young people and inform the public, it is also necessary.  The lack of heroes hurts engineering because it diminishes the enterprise in the public eye and constricts the flow of talent into the field.  In a society that hero-worships rock stars and movie stars, serious fields are lacking serious heroes.

Many would argue that there are plenty of engineering heroes in today’s society: Hewlett and Packard, Steve Jobs, or Bill Gates.  Those individuals, however, are celebrated mostly for building huge corporations based on technology created and developed by many.  In effect, the engineers who earn the most fame are the ones who make the most money, which would lead others to believe that in order to be a hero you must first amass a fortune.  While Zachary states there is nothing wrong with profiting from your ideas, it shouldn’t be the sole marker of a hero in the industry.

Zachary believes that engineering may be lacking heroes because many people truly do not understand the work of engineers anymore.  When Edison created the phonograph in 1877, everybody could relate to the invention.  Today, however, when an engineer designs a microprocessor with 2 billion transistors instead of 1.5 billion, the average individual does not understand the significance.  Zachary also believes that engineers face a structural impediment: there is no Nobel Prize for engineering, nor is there an engineering award with similar global status and prestige.  While a few engineers have received the Nobel Prize in other fields, without a Nobel of their own, engineers cannot anoint their heroes in the same way physicists, economists, or authors can.  Engineering does have the Kyoto Prize in Advanced Technology, the Charles Stark Draper Prize of the U.S. National Academy of Engineering, and the IEEE Medal of Honor, but none of these awards has the same prestige or is as well known as the Nobel Prize.  Zachary also believes these awards underscore the abiding stereotype that engineers are solely male.  Only one of the 34 recipients of the Kyoto Prize in Advanced Technology and one of the 47 recipients of the Draper Prize have been women, and of the 95 people who have received the IEEE Medal of Honor, none have been women.

Zachary questions what it takes to become an engineering hero.  He believes that overcoming adversity, whether personal, institutional, or technological, is a valid criterion.  For example, computer scientist Grace Hopper, developer of the first compiler, beat all three.  She succeeded in a male-dominated field and institution while shaping the course of computer programming and reaching the rank of rear admiral in the U.S. Navy.  Contribution to the social and cultural well-being of humanity is another criterion for engineering heroism in Zachary’s eyes.  Throughout engineering history, however, people have often sought to solve technological problems simply because they were there, not necessarily because they were considering the greater good.  Even so, many of these inventions did result in benefits for humanity.  For example, mechanical engineer Jacob Perkins created the first refrigerator.  While his invention was far from the refrigerators we know today, it is because of his work that countless lives were saved; before refrigeration, foodborne illness and death were common headlines.  If Jacob Perkins isn’t an engineering hero, then I don’t know who is.

Zachary then tackles the question: can heroism be taught, or is it innate?  He strongly believes that heroes are made, not born.  They learn from their experiences, react to opportunities and setbacks, and, when others stay in the safe zone, reach into the grey area searching for something more.  By reaching into the grey area, engineering heroes achieve “charismatic authority,” a phrase coined by German sociologist Max Weber for the ability to influence, inspire, and lead others.  Charismatic authority does not just apply to those who gain outsize status through media acclaim.  Charismatic engineers can also work on an intimate level, influencing their peers behind the scenes or challenging the norm through their inventions or designs.  “The history of engineering is replete with examples of unheralded engineers who refused to accept designs that compromised the public welfare, no matter how profitable they were,” said historian Matthew Hersch. “Inventions like the safety match and the safety bicycle not only worked better than their predecessors, but more ethically. To me, the creators of these technologies are the real heroes.”

The most accomplished engineers have tried and failed many times in their careers.  While many know who Cerf and Kahn are, most have not heard of Louis Pouzin.  Pouzin, the creator of an early packet-switching network called Cyclades, envisioned the democratizing potential of computer networking.  In 1975, Pouzin and Cerf led a group that attempted to get a packet-switching standard adopted by the International Telegraph and Telephone Consultative Committee.  Pouzin publicly criticized the telecom industry’s conservatism and shortly thereafter saw his funding and career opportunities diminish.  Cerf and Kahn incorporated aspects of Pouzin’s ideas into the TCP/IP design for the Internet.  Decades later, Pouzin is finally receiving some recognition for his contribution.  None of these engineers worked alone, and their accomplishments occurred in parallel with the efforts of others.

While the engineering community values modesty and suspects that promotion conceals distortion or even fraud, Zachary truly believes that heroes and heroism are essential for engineers to gain respect and acknowledgement for their activities and technological developments.

http://spectrum.ieee.org/geek-life/profiles/where-are-todays-engineering-heroes

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

Materials Science Engineering Makes a More Energy-Efficient Fuel Cell

 

 


 

 

While renewable energy sources help to fight the effects of global warming, they do have drawbacks.  Renewable energy cannot be produced as predictably as power from plants burning oil, coal, or natural gas.  Ideally, an alternative energy plant would be paired with a huge energy storage system that could store and dispense power on demand.  The Stanford School of Engineering is working on using reversible fuel cells to address this storage issue.  Fuel cells use oxygen and hydrogen to create electricity; if the process is run in reverse, the fuel cell can also be used to store energy.

"You can use the electricity from wind or solar to split water into hydrogen and oxygen in a fuel cell operating in reverse," said William Chueh, an assistant professor of materials science and engineering at Stanford and a member of the Stanford Institute of Materials and Energy Sciences at SLAC National Accelerator Laboratory. "The hydrogen can be stored, and used later in the fuel cell to generate electricity at night or when the wind isn't blowing."

Fuel cells are not a perfect solution.  The chemical reactions that cleave water into hydrogen and oxygen, or join them back together, are not completely understood, at least not to the degree necessary to build utility-grade storage systems.  Chueh is working alongside researchers from SLAC, Lawrence Berkeley National Laboratory, and Sandia National Laboratories to study the chemical reactions in fuel cells in a new way.  In an article published in Nature Communications, Chueh and his team describe how they observed the hydrogen-oxygen reaction in a specific type of high-efficiency solid-oxide fuel cell.  They also took atomic-scale snapshots of the process using a synchrotron, a type of particle accelerator.  This analysis is the first of its kind and could help lead to more efficient fuel cells that eventually allow utility-scale alternative energy storage.

The Electrons' Role

In a traditional fuel cell, a gas-tight membrane separates the anode and cathode. Oxygen molecules are introduced at the cathode where a catalyst fractures them into negatively charged oxygen ions.  These ions then make their way to the anode where they react with hydrogen molecules to form the cell's primary "waste" product: pure water.  To perform these reactions, electrons also need to make the journey.  Normally, the electrons are drawn to the cathode and the ions are drawn toward the anode, but while the ions pass directly through the membrane, the electrons can't penetrate it; they are forced to circumvent it via a circuit that can be harnessed to run anything from cars to power plants.
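For reference, the reactions described above can be written out explicitly for a solid-oxide cell running in the power-generating direction (running the cell in reverse for storage drives them the other way).  These are the standard textbook half-reactions, not equations taken from the paper itself:

```latex
\text{Cathode: } \mathrm{O_2 + 4e^- \rightarrow 2\,O^{2-}} \\
\text{Anode: } \mathrm{2\,H_2 + 2\,O^{2-} \rightarrow 2\,H_2O + 4e^-} \\
\text{Overall: } \mathrm{2\,H_2 + O_2 \rightarrow 2\,H_2O}
```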

Because electrons do the designated "work" of fuel cells, they are thought of as the critical functioning component. But ion flow is just as important, said Chueh.

"Electrons and ions constitute a two-way traffic pattern in many electrochemical processes," Chueh said.  "Fuel cells require the simultaneous transfer of both electrons and ions at the catalysts, and both the electron and ion 'arrows' are essential."

Electron transfer in electrochemical processes such as corrosion and electroplating is relatively well understood, Chueh said, but ion flow has remained unclear.  This is because the environment in which ion transfer may best be studied -- the catalysts in the interior of operating fuel cells -- is not conducive to inquiry.

Solid-oxide fuel cells operate at relatively high temperatures.  Certain materials are known to make superior fuel cell catalysts.  Cerium oxide, or ceria, is particularly efficient.  Cerium oxide fuel cells can hum along at 600 degrees Celsius, while fuel cells incorporating other catalysts must run at 800 C or more for optimal efficiency.  Those 200 degrees represent a huge difference, Chueh said.  "High temperatures are required for fast chemical reactivity," he said.  "But, generally speaking, the higher the temperature, the quicker fuel cell components will degrade.  So it's a major achievement if you can bring operating temperatures down."

How Does It Work?

While cerium oxide has established itself as a strong catalyst for fuel cells, it has been unclear why it works so efficiently.  What was needed were visualizations of ions flowing through catalytic materials, but putting an electron microscope into the pulsing, red-hot heart of a fuel cell running at full bore isn’t exactly possible.  "People have been trying to observe these reactions for years," Chueh said.  "Figuring out an effective approach was very difficult."

In their Nature Communications paper, Chueh and his colleagues at Berkeley, Sandia, and SLAC split water into hydrogen and oxygen (and vice versa) in a cerium oxide fuel cell.  While the fuel cell was running, they applied high-brilliance X-rays produced by Berkeley Lab's Advanced Light Source (ALS) to illuminate the routes the oxygen ions took through the catalyst.  Access to the ALS and the cooperation of its staff enabled the researchers to create "snapshots" revealing just why ceria is such a superior catalytic material: it is, paradoxically, defective.  "In this context, a 'defective' material is one that has a great many defects -- or, more specifically, missing oxygen atoms -- on an atomic scale," Chueh said. "For a fuel cell catalyst, that's highly desirable."

Such oxygen "vacancies," he said, allow for higher reactivity and quicker ion transport, which in turn translate into an accelerated fuel cell reaction rate and higher power. 

"It turns out that a poor catalytic material is one where the atoms are very densely packed, like billiard balls racked for a game of eight ball," Chueh said. "That tight structure inhibits ion flow. But ions are able to exploit the abundant vacancies in ceria. We can now probe these vacancies; we can determine just how and to what degree they contribute to ion transfer. That has huge implications. When we can track what goes on in catalytic materials at the nanoscale, we can make them better -- and, ultimately, make better fuel cells and even batteries."

 

 

http://www.sciencedaily.com/releases/2014/07/140709095931.htm

 

 

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

New Class of Electronic Devices Could Come From 2-D Transistors

 

Earlier this spring, two separate research projects built transistors made solely from two-dimensional (2-D) materials.  Argonne National Laboratory researchers described a transparent thin-film transistor (TFT) they had created in the journal Nano Letters.  They used tungsten diselenide (WSe2) as the semiconducting layer, graphene for the electrodes, and hexagonal boron nitride as the insulator.  A week later, ACS Nano published work from researchers at Lawrence Berkeley National Laboratory who had also built an all-2-D transistor, in the form of a field-effect transistor (FET).  The Berkeley Lab FET used the same materials for its electrode and insulator layers as Argonne’s TFT, but used molybdenum disulfide (MoS2) as the semiconducting layer.

While the fabrication of transparent TFTs from 2-D materials could lead to flexible displays with super-high pixel density, an all-2-D FET could potentially have a broader impact.  FETs are nearly omnipresent, appearing in computers, mobile devices, and many other electronic devices.

The issue with FETs prior to Berkeley Lab’s work has been that their charge-carrier mobility degrades because of mismatches between the crystal structures and atomic lattices of the individual components, namely the gate, source, and drain electrodes.  These mismatches result in rough surfaces and, in some cases, dangling chemical bonds.  The all-2-D FET developed at Berkeley Lab eliminates this issue by creating an electronic device in which the interfaces are based on van der Waals interactions, the attractive or repulsive forces between molecules that are not covalent bonds, instead of on covalent bonding.  "In constructing our 2D FETs so that each component is made from layered materials with van der Waals interfaces, we provide a unique device structure in which the thickness of each component is well-defined without any surface roughness, not even at the atomic level," said Ali Javey, a faculty scientist in Berkeley Lab's Materials Sciences Division.  He also said that the approach "represents an important stepping stone towards the realization of a new class of electronic devices."  By basing the interfaces on van der Waals interactions instead of covalent bonding, it should be possible to reach a degree of control in materials engineering and device exploration that has yet to be seen.

 

http://spectrum.ieee.org/nanoclast/semiconductors/devices/transistors-made-from-2d-materials

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

Thin-Film Solar Cells May Be Toxin-Free in the Future

 

Cadmium chloride is definitely not healthy to be around.  Its cadmium ions are extremely toxic and can cause heart disease, kidney disorders, and many other health problems.  It is ironic that such a toxic substance is essential to the manufacturing of a clean energy technology: thin-film cadmium telluride solar cells.  University of Liverpool researchers have discovered a way around this, however.  They have found that cadmium chloride can be replaced with magnesium chloride, a safe and inexpensive alternative that could help decrease the cost and environmental impact of thin-film photovoltaics.  At approximately $0.50 per pound, magnesium chloride is hundreds of times cheaper than cadmium chloride.

This new poison-free process could allow thin-film solar cells to challenge the dominance of silicon photovoltaics, which currently account for approximately 90 percent of the world’s solar market.  Silicon photovoltaics have some major drawbacks.  Silicon does not absorb light particularly well, so modules require layers of very high-purity crystal, each more than 150 micrometers thick.  The price of these silicon slabs is hindering efforts to reduce the price of solar power.  Thin-film solar cells may be a solution: by using semiconductors that absorb the sun’s rays more efficiently, similar results can be obtained with sheets of lower-purity material only 2 micrometers thick, which results in drastically lower manufacturing costs.
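The difference in required thickness comes down to how strongly each material absorbs light.  A simple Beer-Lambert estimate makes the point; the absorption coefficients below are order-of-magnitude illustrative values, not measured data, and reflection and wavelength dependence are ignored.

```python
import math

def fraction_absorbed(alpha_per_cm, thickness_um):
    """Beer-Lambert estimate of the fraction of light absorbed: 1 - exp(-alpha * t)."""
    thickness_cm = thickness_um * 1e-4
    return 1.0 - math.exp(-alpha_per_cm * thickness_cm)

# Illustrative absorption coefficients near the band edge (order of magnitude only).
print(fraction_absorbed(1e4, 2.0))    # strong absorber (CdTe-like), 2 um film: ~0.86
print(fraction_absorbed(1e3, 2.0))    # weaker absorber (silicon-like), 2 um: ~0.18
print(fraction_absorbed(1e3, 150.0))  # weaker absorber at 150 um: essentially 1.0
```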

The leading thin-film technology, a sandwich of cadmium telluride and cadmium sulfide (CdTe/CdS), makes up between 5 and 7 percent of the solar power market.  While the technology is nothing new, CdTe cells have been slow to take off.  However, their efficiency has risen above 20 percent in the lab over the last few years, and now trails silicon by only approximately 5 percent.

“Now that the efficiency has improved, CdTe can compete commercially with silicon,” says Jonathan Major, a photovoltaics researcher at the University of Liverpool who developed the new magnesium chloride process.  When light hits the boundary region between CdTe and CdS in the cells, it excites electrons that are drawn into the CdS layer (an n-type semiconductor).  As the holes left behind by those electrons fall into the CdTe (p-type) layer, the separation of charge generates a current.  The two layers must be treated with a solution of cadmium chloride or an equivalent to make them function efficiently.  “This process is used by all the [manufacturing] plants,” says Major, and it requires specialized industrial waste processing facilities to handle the material.  The treatment has several effects, one being that the material’s chloride ions help to make a better junction between the two semiconductor layers.  Also, Chen Li at Oak Ridge National Laboratory in Tennessee found that chloride replaces some tellurium in the CdTe layer.  “That protects electrons and holes from unwanted recombination,” says Li, which allows current to flow more efficiently. 

Major’s team tested several chloride salts as replacements for cadmium chloride, and found that a vapor treatment of magnesium chloride achieved the best results.  Their cells were able to achieve efficiency levels of 13.5 percent, similar to control cells made using the conventional process.  They were also able to match on other factors, such as voltage, current density, and stability.  Other design improvements, such as thinning the CdS layer, increased cell efficiency to 15.7 percent.  While fume hoods and gas masks are required during the cadmium chloride process, magnesium chloride can be deposited using an airbrush.  

Major has already been in touch with the leading manufacturer of CdTe solar cells: First Solar, located in Tempe, Arizona.  First Solar manufactured the world’s largest solar photovoltaic power facility, Arizona’s Agua Caliente Solar Project, which has an installed capacity of 290 megawatts.

“The cadmium chloride treatment is to date a critical part of the CdTe solar cell manufacturing sequence,” says Raffi Garabedian, chief technology officer at First Solar.  “We apply a full and robust set of environmental, health, and safety controls in order to guarantee that we have no adverse impacts as a result of our manufacturing operation.”  Garabedian adds that, "Despite the cost of these controls, the cadmium chloride treatment step is not a major cost driver in our manufacturing process.”  That, however, is not what Major was told.  “Talking to them privately," says Major, "they said that cadmium chloride was the second biggest expense in their process.”

Regardless of the cost implications, replacing toxic cadmium chloride is clearly a sensible move, and we may see more magnesium chloride used in the future.

http://spectrum.ieee.org/energywise/green-tech/solar/thin-film-solar-cell-freed-from-toxic-processing

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

Integrated Circuit Design Changes Could Bring Back Vacuum Electronics

 

By the mid-1970s, the only vacuum tubes you could find in Western electronics were in certain kinds of specialized equipment.  Today, vacuum tubes are pretty much a nonexistent technology, but that may change.  Some changes to the fabrication techniques used in integrated circuit design could bring vacuum electronics back.

NASA Ames Research Center has been working to develop vacuum-channel transistors.  While the research is still in its early stages, the prototypes hold great promise.  Vacuum-channel transistors have the potential to work 10 times as fast as ordinary silicon transistors and may be able to operate at terahertz frequencies.  They are also much more tolerant of heat and radiation.  To understand why these developments may be possible, it helps to understand a little about the construction and functionality of vacuum tubes.  The vacuum tubes that amplified signals in radios and televisions during the first half of the 20th century may not seem to resemble the metal-oxide-semiconductor field-effect transistors (MOSFETs) used in modern electronics, but they do have similarities.  Both are three-terminal devices.  The voltage applied to one terminal (the grid for the vacuum tube, the gate for the MOSFET) controls the amount of current flowing between the other two (from cathode to anode in a vacuum tube, and from source to drain in a MOSFET).  This allows both devices to function as amplifiers, or in some cases as switches.  How electric current flows in a vacuum tube compared to a transistor is very different, however.  Vacuum tubes rely on a process called thermionic emission: heating the cathode causes it to shed electrons into the surrounding vacuum.  The current in a transistor instead comes from the drift and diffusion of electrons between the source and the drain through the solid semiconducting material that separates them.

Solid-state electronics surpassed vacuum tubes because of their lower cost, smaller size, longer lifetimes, efficiency, ruggedness, reliability, and consistency.  However, when looking solely for a medium to transport charge, vacuum beats a semiconductor.  Electrons move freely through a vacuum, whereas in a solid they collide with the atoms of the crystal, a process called crystal-lattice scattering.  Vacuums are also not susceptible to the kind of radiation damage that semiconductors are, and they produce less noise and distortion than solid-state materials.  When only a few vacuum tubes were needed to operate a radio or television, their drawbacks were not that significant.  As circuits became more complicated, however, it became obvious that something needed to change.  For example, the 1946 ENIAC computer used 17,468 vacuum tubes, weighed 27 metric tons, and took up almost 200 square meters of floor space.  The transistor revolution ended these issues.  The great change in electronics occurred not so much because of the intrinsic advantages of semiconductors but because engineers could mass-produce and combine transistors in integrated circuits by etching a silicon wafer with the appropriate pattern.  As the technology progressed, more transistors could be put on a microchip, allowing circuit designs to become more complicated from one generation to the next.

After more than 40 years of scaling, the oxide layer that insulates the gate electrode of a typical MOSFET is only a few nanometers thick, and only a few tens of nanometers separate its source and drain.  While transistors can't get much smaller, the quest for faster and more energy-efficient chips moves forward.  One possible candidate to replace the traditional transistor is the vacuum-channel transistor, which combines the best aspects of vacuum tubes and transistors and can be made just as small and inexpensively as any solid-state device.  In a vacuum tube, an electric filament is used to heat the cathode so that it emits electrons.  Vacuum-channel transistors do not require a filament or a hot cathode: if the device is small enough, the electric field across it is sufficient to draw electrons from the source by the field emission process.  Removing the inefficient heating element reduces the area each device takes up and makes the new transistor more energy efficient.  Current flow in a vacuum-channel transistor is controlled the same way as in a traditional MOSFET, using a gate electrode separated from the current channel by an insulating dielectric material such as silicon dioxide.  The dielectric insulator transfers the electric field where it's needed while preventing current from flowing into the gate.
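For reference, the field-emission current mentioned above is commonly described by a Fowler-Nordheim-type relation.  It is shown here only in proportional form as a standard textbook expression (constants omitted), not as something taken from the article, with J the current density, E the local electric field, phi the emitter work function, and B a material-dependent constant:

```latex
J \;\propto\; \frac{E^{2}}{\phi}\,\exp\!\left(-\,\frac{B\,\phi^{3/2}}{E}\right)
```

The exponential dependence on the field is why the tiny gaps in a nanoscale vacuum-channel device can emit useful current without any heater.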

While the work being done on vacuum-channel transistors is in its early stages, these developments could have a major impact on devices where speed is critical.  The first prototype effort produced a device that could operate at 460 gigahertz, approximately 10 times faster than the best silicon devices.  This offers great promise for vacuum-channel transistors operating in the terahertz gap.

 

http://spectrum.ieee.org/semiconductors/devices/introducing-the-vacuum-transistor-a-device-made-of-nothing

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

IEEE 2014 Medal of Honor Recipient Comes from the Field of Power Semiconductors

 

B. Jayant Baliga, originally from the outskirts of Bangalore, India, is this year’s IEEE Medal of Honor recipient.  Science and engineering were a part of Baliga’s life from an early age.  His father, one of India’s preeminent electrical engineers, was chairman and managing director of Bharat Electronics Limited.  Baliga developed his interest in science, especially electrical engineering, by immersing himself in his father’s technical library.  Later he studied electrical engineering at the Indian Institute of Technology Madras.  While studying there, Baliga found a subject that interested him even more, physics, but switching majors was not an option.  He decided to combine his interests and study semiconductors.  Wanting to avoid living in his father’s shadow, Baliga decided to continue his studies of semiconductors abroad, at Rensselaer Polytechnic Institute (RPI) in Troy, New York.

As a master’s student studying under Sorab K. Ghandhi, Baliga worked on gallium arsenide semiconductors.  During his Ph.D. work he investigated a technique for growing indium arsenide and gallium indium arsenide semiconductors, a process now known as metal-organic chemical vapor deposition.  This research was extremely dangerous, as the compounds involved would detonate when exposed to air.  Ghandhi was not deterred, and counseled his student to build a reaction vessel that was “really tight.”  After earning his Ph.D. in 1974, Baliga hoped to land a research position with IBM or Bell Laboratories.  However, with only a student visa, Baliga was not able to get an interview with either institution.  A fellow graduate student at RPI, who was also working for General Electric’s research laboratory, told him about a position investigating power devices.  Baliga was not thrilled with the prospect of working on power devices, believing that all the interesting work had already been done.  With no other options, Baliga applied and got the job.

Baliga’s early work for GE involved thyristors, semiconductor devices that are now mostly used for handling extremely high voltages.  During his studies, Baliga thought it might be possible to get them to work like regular transistors, which can be switched on and off on command.  GE needed energy-saving variable-frequency motor drives, and Baliga designed a thyristor-like device that combined attributes of MOSFETs and bipolar transistors.  At that time these two kinds of devices had not been combined.

Baliga’s colleagues shared his idea with GE’s chairman and CEO, Jack F. Welch Jr., and in 1981 Welch traveled to GE’s research center to be briefed on the new transistor concept.  The meeting went well, and within a year the team was fabricating wafers with the new design.  Originally the device was named the “insulated-gate rectifier,” in an attempt to distinguish it from ordinary transistors.  Baliga later changed the name to insulated-gate bipolar transistor (IGBT) so as not to confuse application engineers.


The IGBT successfully avoided catastrophic “latch-up,” the thyristor-like continuation of current flow after a transistor is turned off.  However, it still switched off too slowly to be used in variable-frequency motor drives, and the known methods of speeding up a transistor would ruin this type of MOS device.  Baliga found a way to speed up the IGBT: electron irradiation.  While this method had been used on bipolar power rectifiers, it damaged the MOS device.  Baliga figured out how to apply just enough heat to repair the damage done to the MOS structure while keeping the speed boost.

After one of GE’s investments went badly, Welch decided to sell off GE’s entire semiconductor business in 1988, leaving Baliga’s expertise useless to the company.  While Baliga was assured he would have a position in management, his heart was still in science.  With other offers holding little promise, and with academic activity in power devices nearly nonexistent in the United States, Baliga chose to create his own research program.  In 1988 Baliga moved to North Carolina State University, where he has now taught and done research for 25 years.  Recently, President Obama visited to announce the creation of the Next Generation Power Electronics Innovation Institute and a $140 million U.S. grant to the university, a grant that Baliga and his team at the university’s Future Renewable Electric Energy Delivery and Management Systems Center helped to win.

One of the goals of the new institute is to speed the development of MOSFETs and other power devices made with wide-bandgap semiconductors.  In the future, wide-bandgap MOSFETs should be cheap and reliable enough to replace IGBTs.  Baliga is fine with that potential outcome.  While he is the creator of the silicon IGBT, a narrow-bandgap device, Baliga has always supported wide-bandgap devices as well.  While developing the IGBT at GE, he was also creating the first wide-bandgap power semiconductor, a gallium arsenide rectifier.  During this time he also derived a way to calculate from basic theory which semiconductor materials function best in power devices.  This expression is now known as Baliga’s figure of merit, and it highlights the potential of silicon carbide and other wide-bandgap semiconductors.  The challenge, and what Baliga and his students are actively pursuing, is a way to make these devices cheap enough to compete with silicon.
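As context for that figure of merit, a commonly quoted low-frequency form scales as BFOM ∝ εr · μ · Ec³, where εr is the relative permittivity, μ the carrier mobility, and Ec the critical breakdown field.  The sketch below uses approximate textbook-style parameters, chosen only to show the order-of-magnitude advantage of a wide-bandgap material such as 4H-SiC over silicon; it is an illustration, not a statement of Baliga's original derivation.

```python
def baliga_fom(rel_permittivity, mobility_cm2_per_vs, critical_field_mv_per_cm):
    """Relative (unnormalized) Baliga figure of merit: eps_r * mu * E_c**3.

    Only the ratio between materials is meaningful here; units cancel in the ratio.
    """
    return rel_permittivity * mobility_cm2_per_vs * critical_field_mv_per_cm ** 3

# Approximate, illustrative material parameters.
si = baliga_fom(11.7, 1350, 0.3)   # silicon: E_c ~ 0.3 MV/cm
sic = baliga_fom(9.7, 900, 2.5)    # 4H-SiC: E_c ~ 2.5 MV/cm
print(f"4H-SiC / Si BFOM ratio ~ {sic / si:.0f}")  # on the order of a few hundred
```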

As always, please feel free to comment below and let the bloggers at Glew Engineering know if there is a specific topic you’d like us to blog about in the future.

 

Schneider, David. (2014, May). The Power Broker. IEEE Spectrum, 52-58.

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

Series on Semiconductor Processing and ICs, Part 14: Inventions that Led to the Modern Integrated Circuit

 

Below is Glew Engineering’s 14th article in our series on ICs and semiconductor processing.  These articles are written for those who are not technical specialists in the semiconductor field.  Below we highlight some of the crucial inventions that led to the modern integrated circuit.

Many of the devices that make up today’s integrated circuits were invented long before the technology was available to mass-produce them.  Rectification, photoconductivity, and other basic semiconductor properties were discovered prior to 1900, although they were not fully understood at the time.  By the mid-1930s, simple devices based on these properties were available, and the physics behind the behavior of metal/semiconductor contacts was beginning to be understood, largely through the work of William Shockley and Nevill Mott.  World War II put much of the initial semiconductor work on pause, particularly at Bell Telephone Laboratories, where an effort was underway to find a solid-state device for switching telephone signals.  Work resumed shortly after the end of the war, and a major breakthrough came in December 1947 when a point-contact transistor was demonstrated.  The work that followed produced the bipolar transistor and earned John Bardeen, Walter Brattain, and William Shockley the Nobel Prize in Physics in 1956.

Interest in semiconductor surfaces was renewed in the 1950s, when it became apparent that the reliability issues associated with the new transistor structures were related to surface effects.  In an experiment performed in 1953, Brattain and Bardeen found that the surface properties of semiconductors could be controlled by exposure to oxygen, water, or ozone ambients.  Other experiments over the next few years led to the first high-quality SiO2 layers grown on silicon (Si) substrates.

The first point-contact transistors in 1947 were built in polycrystalline germanium.  Shortly afterward, the device was demonstrated in silicon and in single-crystal material.  These developments had a significant impact on the integrated circuits of the future: single crystals provided uniform and reproducible device characteristics, which eventually made it possible to integrate millions of identical components side by side on a chip.  Many of the developments associated with single-crystal source material are credited to Gordon Teal of Bell Labs.

By the mid-1950s, both grown-junction and alloy-junction bipolar transistors were commercially available, with germanium still the dominant material.  While these devices were useful components, the technologies used to build them were not extendible to multitransistor integrated circuits: exposed junctions were present on the semiconductor surface, and there was no way to interconnect multiple devices.  Part of the solution was provided by the invention of gas-phase diffusion processes at Bell Labs, which led to the commercial availability of diffused mesa bipolar transistors by 1957.

The next major breakthrough came with the invention of the planar process by Jean Hoerni of Fairchild Semiconductor.  This process relied on the gas-phase diffusion of dopants to produce N- and P-type regions, as well as on the ability of SiO2 to mask these diffusions.  This advance was largely responsible for the switch from germanium to silicon.  One final invention was necessary to enable modern IC technology: the ability to integrate multiple components on the same chip and to interconnect them to form a circuit.  Jack Kilby of Texas Instruments and Robert Noyce of Fairchild Semiconductor invented the integrated circuit in 1959.  By combining P- and N-type diffusions and SiO2 passivation layers, many types of devices, including transistors, resistors, and capacitors, became possible in modern IC structures.

Since 1960, the basic technologies used to manufacture integrated circuits have not fundamentally changed.  There have, however, been significant improvements in depositing, etching, diffusing, and patterning.  These changes have been evolutionary rather than revolutionary, yet the cumulative progress over the last 50 years has been enormous, and we should expect many more developments in the years to come.

Jim Plummer, one of the co-authors of the text Silicon VLSI Technology: Fundamentals, Practice and Modeling, earned his Ph.D. in electrical engineering from Stanford University in 1971.[i]  From 1971 to 1978, Plummer was a member of Stanford's research staff in the Integrated Circuit Lab.  After working as an associate professor at Stanford, Plummer became a professor of electrical engineering in 1983.  Plummer has worked in a variety of areas involving silicon devices and technology.  His early work focused on high-voltage ICs and high-voltage device structures.  With his team, Plummer made a crucial contribution by integrating CMOS logic and high-voltage lateral DMOS devices on the same chip and demonstrating circuits operating at several hundred volts.  His work led to several power MOS device concepts, such as the IGBT, which have become important power switching devices.

We hope you enjoyed this overview of the crucial inventions leading to the integrated circuits we see today.  As always, please feel free to leave a comment below and let the bloggers at Glew Engineering know if there is a specific subject you would like us to cover in the future.

 


[i] https://profiles.stanford.edu/jim-plummer

Plummer, J. D., Deal, M. D., & Griffin, P. B. (2000). Silicon VLSI Technology: Fundamentals, Practice and Modeling. New Jersey: Prentice Hall.

 

For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

Linear v Novellus (Semiconductor Equipment)

  
  

After 8 long years, Novellus has finally rid itself of its lawsuit with Linear Technology.  Irell & Manella LLP, for whom Glew Engineering has worked in the past, took no prisoners in the unanimous jury verdict announced yesterday in favor of their client, Novellus.  The jury consisted of 12 men and women in Santa Clara, CA, the heart of Silicon Valley.  It is certainly good news for Novellus' legal team, as well as for its bottom line.  Congratulations to Jonathan Kagan, Esq., and his colleagues.  Now both sides can get back to what they do best: making chips and chip equipment.

Novellus also shipped its 1000th Vector PECVD tool in February.  Considering the tool's throughput and uptime, there may be as many chips out there by now carrying Novellus' dielectric films as those of any other semiconductor equipment manufacturer.  See the details at:

http://ir.novellus.com/releasedetail.cfm?ReleaseID=441840

 

