GLEW'S NEWS BLOG

    Project Management Elements Part One: The Requirements Document


In part one of this blog series, I will address the first project management element: the requirements document.  A high-quality requirements document is a crucial first step toward an efficient design process that delivers predictable results.

As a systems engineer, I have found it to be in my own self-interest to take an active role from the beginning in the process of creating a requirements document (you should too).  Here are a few reasons why:

1. Pain Avoidance:  The sooner you are involved in the creation and definition of the specification, the less likely you are to encounter a requirement that is either (a) impossible or (b) incomprehensible.  When you are part of the process, you can ensure that the details are clear and avoid the pain of after-the-fact detective work to get to the meaning behind the language of a requirement.

2. Skin in the Game:  When you are a part of creating the requirements document, you have a stake in the project from the beginning.  Other members of the specification team are more likely to view you as a collaborator than as an adversary.

3. Advance Warning:  The sooner you see something on the horizon, the sooner you can formulate a plan to deal with it.

4. Get the Spec You Want:  When you help the team follow "the rules," they are more likely to do so.  The result will be a detail-rich specification that helps your development team avoid surprises.

      Essential Requirements for a Good Specification

      IBM has done a lot of pioneering work in defining and codifying the engineering process.  They did this initially as a self-defense mechanism to manage their internal growth across a huge, multi-national enterprise.  Eventually, some bright spark in the company recognized that a software tool could be developed and sold to help others collaborate and manage this critical task.  Their product, Rational DOORS™, is aimed at software developers but is useful to every engineering development discipline.  The IBM website for the Rational DOORS™ product is not just a sales site.  It also contains a lot of useful educational material in the form of white papers on the specification process.  I think they got a lot of ideas exactly right.

There is a great IBM white paper that details the essential elements of a high-quality requirements specification as they apply to IBM's software product: Ten Steps to Effective Requirements Management.  They may seem obvious to you, but it is amazing how many so-called "complete" market requirements documents (MRDs) I have received over my career that violated one or more of these ideas, to the detriment of the engineering development project that followed.

      Each Requirement Should Be...

      • Correct:  A requirement is technically accurate and legally appropriate.  It needs to address a specific market need in a way that follows appropriate standards and regulations.

      • Complete:  A requirement presents a complete idea.  Towards this end, requirements should use complete sentences, rather than bullet points.

      • Clear:  A requirement is clearly defined and not ambiguous.  It needs to use language and terminology that is meaningful to the developers.  As the systems engineer, your job is to help non-engineers on the team get there.

      • Consistent:  A requirement must not be in conflict with other requirements.  It's amazing how often this happens.  This is another area where savvy systems engineers can help the group achieve clarity.

      • Verifiable:  A requirement must be verifiable.  How will the team know if they have met the requirement?  A good requirement has the metrics specified.

      • Traceable:  Each requirement is distinctively identified and tracked.

      • Feasible:  A requirement must be effectively addressed within the specified cost and schedule constraints.  This area is the biggest source of negotiation between the engineering group and the marketing group.  The system engineer can really contribute here by bringing specific knowledge of the development group’s capabilities and loading to the table.

      • Modular:  A requirement must be easily changed without excessive effort.  Creeping features are a fact of life for a development group.  The goal here is to keep requirements sufficiently granular so that a change to one does not ripple outward too far.

• Design-Independent:  A requirement does not impose specific design solutions.  As far as I am concerned, this is the easiest goal to state and the hardest to actually accomplish.  A customer will seldom tell you the actual problem he needs solved.  Instead, he will demand something that, if he had it, would make his problem go away.  Your challenge is to keep chipping away at the "requirement" and get to the problem underneath.  The systems engineer's participation at the front end of the specification process helps to keep it free of codified customer suggestions.
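To make attributes like "traceable" and "verifiable" concrete, here is a minimal sketch of what a single requirement record might look like in code.  This is purely illustrative; the field names and the example requirement are hypothetical, not taken from any actual MRD or from Rational DOORS™:

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    """One uniquely identified, verifiable requirement (illustrative only)."""
    req_id: str     # unique ID, so the requirement can be traced
    statement: str  # a complete sentence, not a bullet fragment
    metric: str     # how the team will verify the requirement was met
    source: str     # where it came from, for traceability back to the MRD

# Hypothetical example entry:
r = Requirement(
    req_id="SYS-042",
    statement="The system shall process a 300mm wafer in under 60 seconds.",
    metric="Cycle time measured over 100 consecutive wafers <= 60 s",
    source="MRD section 3.2",
)
print(r.req_id, "->", r.metric)
```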

      Once complete and signed off within the organization, the requirements document will serve as the model for the project deliverables until a completed engineering specification replaces it.  Part two in this blog series will deal with the completed engineering specification.

      For more information on Glew Engineering Consulting visit the Glew Engineering website, blog or call 800-877-5892 or 650-641-3019. 

      An Engineer's Guide to Thanksgiving


      In my home, I do the cooking for Thanksgiving dinner.  I love being the chef but engineering is in my blood so I view Thanksgiving as just another project to be managed.  Here’s how a professional electrical engineer and project manager approaches the big feast.

      Market Requirements

The start of a development project is usually a Market Requirements Document (MRD).  It's the wish list from the customers for the solution to their problem: in this case, a great Thanksgiving dinner.  Notice, I didn't say "marketing" requirements.  That's because a proper MRD reflects what the market is asking for, not what the marketing department wants.  Of course, since in my house my wife is the marketing department as far as dinner is concerned, I suppose marketing gets to add some requirements as well.  Perhaps this happens in your company, too.


      Figure 1: Dinner MRD

For my part in the Thanksgiving dinner, the MRD is the menu.  Figure 1 is the MRD I received this year.  What it has in common with the majority of MRDs I have received throughout my engineering career is a lack of detail.  So the next step is to go back to marketing and flesh out the requirements list with as much detail as possible, including nice-to-haves as well as hard requirements.  The better the project manager does in this step, the happier the customer is likely to be with the final product, so it behooves me to be thorough.

      Engineering Specification

      The chef (engineer) responds to the MRD with an engineering specification, which is a proposal as to what will actually be designed (cooked).  By agreement, the engineering spec replaces the MRD as the target and the final deliverables are compared against this document.  Figure 2 is my actual menu for the dinner.


      Figure 2: Final Menu

As you can see, I work in Excel at this point, both to organize my thoughts and because I can later expand the menu into a shopping list.  Figure 3 shows an excerpt from the total list.  My "Bill of Materials" shopping list expands outward from each menu item and helps me ensure that nothing is forgotten for the big shopping trip.

Figure 3: Shopping List Excerpt

      One year my wife was in the checkout line at the market and was carefully checking the contents of her cart against the printout when a person in line behind her asked, “Your husband is an engineer, isn’t he?”  Guilty as charged.

      The Project Plan

Antoine de Saint-Exupery, a French writer in the early 1900s, is credited with saying "A goal without a plan is just a wish."  Since I strongly wish for my dinner to finish on time with minimal fuss, it's time to make a plan.  In future blogs I will spend time on the Work Breakdown Structure (WBS) as a planning tool, but for now let's suppose we have already gone through that exercise: we have a list of the things that must be done to reach the finish line, along with realistic estimates of the time needed to accomplish them.  In this case, your list of tasks should come from your recipes.

My favorite tool for project planning is MS-Project, but there are free, web-based tools out there that are satisfactory for simpler plans.  The first step is to get the tasks entered into the project tool along with the time estimates.  Figure 4 shows an illustrative part of the total plan.


      Figure 4: Tasks With Time Estimates

The tasks shown come from the recipes I plan to use.  This year I will be using a fabulous recipe from America's Test Kitchen [1], which I tested on some tolerant guests a month ago as a proof of concept (another facet of good engineering practice).  It is a variation on a Julia Child technique which dismantles the bird and cooks the main pieces on a bed of stuffing.  I can't recommend it highly enough!  Tasks 3-10 represent the process.

      There is a green arrow at the top right corner of the chart which represents my “delivery” goal.  At this point I don’t know if I have enough time to do these things or not.  As a start, I have set my work day as beginning at 6AM.  I won’t know if I am in trouble for time until I add the dependencies, relating the tasks that have to happen before other tasks can happen.

This is my exact process for every engineering project: wait until the project software gives you the "answer" before worrying about compromises or more creative ways to get the project accomplished.  Figure 5 shows the result of adding the dependencies.


      Figure 5: Major Dinner Elements With Task Dependencies

A lot of the dependencies are simple linear chains.  A few others are more interesting.  Task 8 requires the stuffing to be ready before it can be completed, so you can see an arrow between task 20 and task 8.  I could have chosen to get the stuffing ready any time before about 1:30PM, but I prefer a "just in time" delivery, so I have set a "negative" dependency between tasks 8 and 13 which starts the stuffing 2 hours before I need it.
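For readers who prefer code to Gantt charts, the sketch below shows the same forward-pass idea in Python: each task starts at the latest of its predecessors' finish times plus any lag, and a negative lag produces the "just in time" overlap described above.  The task names, durations, and lags are illustrative stand-ins, not my actual plan:

```python
from datetime import datetime, timedelta
from functools import lru_cache

DAY_START = datetime(2014, 11, 27, 6, 0)  # work day begins at 6 AM

# task -> (duration, [(predecessor, lag), ...]); values are illustrative.
# A negative lag lets a task begin before its predecessor finishes.
TASKS = {
    "prep_turkey":   (timedelta(hours=1), []),
    "roast_turkey":  (timedelta(hours=4), [("prep_turkey", timedelta(0))]),
    # start the stuffing 2 hours before roasting finishes, just in time
    "prep_stuffing": (timedelta(hours=2), [("roast_turkey", timedelta(hours=-2))]),
    "serve_dinner":  (timedelta(minutes=30), [("roast_turkey", timedelta(0)),
                                              ("prep_stuffing", timedelta(0))]),
}

@lru_cache(maxsize=None)
def earliest_start(name: str) -> datetime:
    """Forward pass: latest (predecessor finish + lag), else the day start."""
    _, deps = TASKS[name]
    start = DAY_START
    for dep, lag in deps:
        dep_finish = earliest_start(dep) + TASKS[dep][0]
        start = max(start, dep_finish + lag)
    return start

for task, (duration, _) in TASKS.items():
    s = earliest_start(task)
    print(f"{task:13s} {s:%I:%M %p} - {s + duration:%I:%M %p}")
```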

      Here are a few other things the plan shows:

      1. We come in on time!  Of course, there are other tasks to be added before we have the total menu complete but for this example everything looks good.  As expected, the turkey is on the critical path.

      2. I will be busy from around 6AM to 8AM, 11:30AM to 1PM, and 3:30PM on.  This suggests that I indeed have room for other cooking and prep tasks throughout the day.

      3. I have color coded times when our single oven will be busy.  I will need to be mindful of this “resource” conflict as I add other cooking tasks that might want the oven.

      If this all seems like a lot of work, consider the benefits:

      • I am confident that I am not going to forget anything on the big day.

      • I know what to do and WHEN to do it.

      • I can sleep the night before without worry.

      • I get to have fun in the kitchen without stress.

      May your holiday be free from trouble and stress.  Let me know if you try the ATK recipe and how it worked for you.  Happy Thanksgiving!

      To view the completed project plan see the link below:

      Complete Thanksgiving Project Plan

      [1] http://www.americastestkitchen.com/recipes/7483-julia-childs-stuffed-turkey-updated


      Robotics: Semiconductor Industry Prepares for 450mm Wafers (Part 2)


The semiconductor industry has been manufacturing products using 300mm diameter wafers for more than 10 years.  The push to transition to 450mm wafers, while it provides a huge benefit in available area for chips, is causing tool and robotics vendors to reexamine, and in many cases redesign, their systems to cope with the emerging standard.

      I’ve been involved with the semiconductor industry since the 1980s and I remember when the 200mm wafer became the standard.  I was running the engineering group at Electroglas and we were scrambling to extend our prober to deal with Intel’s insistence that they were not going to buy any prober that was not 200mm compatible.  As it turned out, we were early to the table.  The EG Model 3001 prober became available in 1986 but real acceptance of 200mm wafers did not happen until the early 1990s. 

Fast forward to 2000.  I was working for PRI/Equipe Automation doing wafer-handling front ends, and I was visiting the Semiconductor Equipment and Materials International (SEMI) headquarters on a 300mm wafer-related issue.  There, I saw a sample 450mm wafer on the wall.  My first thoughts were "How are we going to contend with these manhole-sized wafers?" and "How much trouble are we in?"


      Figure 1: Wafer Scale Comparison

      Evolution of Wafers

As Figure 1 shows, 450mm wafers are just the latest in a long line of wafer sizes stretching back to the 1970s.  It is interesting to note that just because Samsung and TSMC are working in 300mm wafers doesn't mean there isn't some small fab or lab somewhere in the world still cranking out 100mm wafers full of jelly-bean parts like 2N2222 transistors (yes, those are still around).  All that production equipment has to go somewhere.

As it turns out, the transition from pipe dream to reality for 450mm has been slow, to say the least.  Advances in semiconductor device technology, like vertical MOSFET transistor architecture, are improving chip density on current 300mm production and pushing out the need for the bigger wafers.  Even so, current projections are for a rollout in the 2017-2018 timeframe.  That's just around the corner, and even though these transitions have traditionally come later than projected, it's more than time to consider what your company will do to contend with the next (really) big thing.

      450mm Wafers and the Issues Wafer-Handling Robots Face


      Figure 2: Simple Robot to Chamber Layout

      Figure 2 shows a wafer-handling robot moving a 450mm wafer into a process chamber.  It has to contend with clearing a load lock as well as placing the wafer on the receiver within the process chamber.  The robot reach has to be in the 750mm range and perhaps even more.  This means longer arms, stiffer bearings and joints to prevent cantilever sag, and possibly a new end-effector design to deal with the increased mass and flexibility of the new wafer.

      Robot Designers and Semiconductor Test Equipment Companies Await Finished Standards

There are SEMI standards for the new wafers and cassettes [1], but they are still being revised.  This forces robot designers and semiconductor test equipment companies to gamble that key dimensions will remain stable.  Presently, wafer thickness is specified at 925um +/- 25um.  This puts the weight of a 450mm wafer around 343g, as opposed to the 113g weight of a 300mm wafer.  Robot motion profiles will have to change to assure that wafers don't "flap" as they are swung around by the robot.
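As a quick sanity check on that weight, you can estimate wafer mass from the geometry and the density of silicon.  This is a back-of-the-envelope sketch that ignores the edge profile and notch:

```python
import math

DENSITY_SI = 2.329     # g/cm^3, crystalline silicon
diameter_cm = 45.0     # 450 mm wafer
thickness_cm = 0.0925  # 925 um nominal thickness, per the spec above

volume_cm3 = math.pi * (diameter_cm / 2) ** 2 * thickness_cm
print(f"estimated 450mm wafer mass: {DENSITY_SI * volume_cm3:.0f} g")  # ~343 g
```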

      End Effectors

End effectors (EE) to transfer the wafers need to be longer and stiffer.  They also have to be thin enough to operate with margin in the gap between slots of a cassette or FOUP; the current spec puts the gap at 10mm.  And the wafers droop when supported by their edges, by as much as 940 +/- 20 um [2].  Plan for a little margin, and this leaves about 7mm at most for the wafer-plus-end-effector stack.  Plan for an EE at least 300mm long, supporting a mass of 343g, and perhaps 5mm thick.  That's a real challenge for your company's ME team.
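Spelling out that clearance budget (the droop figure comes from [2]; the handling margin is my own assumption for illustration):

```python
slot_gap_mm = 10.0        # gap between cassette/FOUP slots, current spec
wafer_droop_mm = 0.96     # worst-case edge-supported sag, 940 +/- 20 um [2]
handling_margin_mm = 2.0  # assumed clearance for robot placement error

budget_mm = slot_gap_mm - wafer_droop_mm - handling_margin_mm
print(f"wafer + end effector thickness budget: ~{budget_mm:.0f} mm")  # ~7 mm
```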

      450mm Wafers Cause Engineers to Rethink Handling

      Of course, systems will also get larger as a consequence of the move to 450mm.  The Figure 2 example results in a minimalistic system at least 1.7m across without framing or skins.  Not so obvious might be the human factors issues related to 450mm wafers.  A cassette of 25 wafers could easily weigh 20-25lbs.  Operators are not likely to be carrying those around.  Even handling a single wafer manually may be a dicey proposition.  Does your company need to provide a transport helper as part of your system?  Even something as mundane as an optical probe mark inspection microscope may not be practical if an operator has to lean out across a large system to get their eyes up to the eyepieces.

      Time to Take Stock of Automation Equipment

If your company has not already done so, you may wish to consider subjecting your flagship tool to a Failure Mode and Effects Analysis (FMEA) focused on the emergence of 450mm wafers.  An experienced systems engineer or a consulting firm can help set this up and bring an unbiased assessment to light.  In the end, changes resulting from the move to 450mm wafers will be a matter of when, not if.

      [1] SEMI M1-0414 Specification for Polished Single Crystal Silicon Wafers

[2] Goldstein, M. and Watanabe, M., "450 mm silicon wafers challenges – wafer thickness scaling," ECS Transactions, 16 (6) 3-13 (2008), doi:10.1149/1.2980288


      Finite Element Analysis of Athletic Equipment: Baseball and Hockey


As we continue our series on stress analysis in athletic equipment, we will take a closer look at hockey sticks and baseball bats.  Mechanical engineering plays an expanding role in the design and testing of athletic equipment.  Using computer-aided design (CAD) and finite element analysis (FEA), mechanical engineers can show that the stresses seen in hockey sticks and baseball bats differ from the stresses seen in the shoes and helmets worn by the players.  Shoes and helmets repeatedly experience smaller stresses, while hockey sticks and baseball bats experience a sudden, very high concentration of stress as they collide with a ball or puck.  In this blog we will discuss how the strength of this equipment is determined using FEA.

      FEA is one tool used in analysis

A baseball bat is arguably the most important tool throughout the course of a baseball game.  Coincidentally, it is also the piece of equipment most likely to break, and it typically breaks in one of two ways: either the grain near the handle is subjected to too much force, or the bat reaches its failure stress at the critical point.  A finite element analysis of a bat showed that a bat-ball collision about 5 inches away from the sweet spot, at what is known as the critical point, can result in stress upwards of 4 times higher than the stress at the sweet spot itself [1].  The dynamic stress in the handle can reach over 30,000 psi for about 0.007 seconds, which is about twice the average strength of most hardwoods used to make bats.  Due to the natural elasticity of wood and its shock-absorbing nature, a bat is able to survive stresses like this, but after repeated impacts fatigue begins to set in.  Fatigue starts with micro-cracks in the bat, and as the stress cycle repeats, these cracks grow until they cause a fracture [2].

The wood a bat is made from has a large impact on its strength.  The most popular choice used to be white ash, because it is less dense than most other woods, but after Barry Bonds' record-breaking home run season, in which he used a maple bat, many players abandoned white ash in hopes of finding more power.  The con of using a maple bat, though, is that it is much stiffer and cannot handle the same stresses a white ash bat can, due to ash's natural ability to flex.

Beyond the natural strength of maple and ash, their grain patterns have a large impact on the breaking point of the bat.  An ash bat has two types of grain, edge-grain and flat-grain, while a maple bat's grains all run the length of the bat.  The edge-grain runs parallel to the length of the bat, while the flat-grain is the semi-circular lines that run perpendicular to it.  If an ash bat contacts the ball on the edge-grain, it transfers the impact forces solidly to the ball; if it makes contact on the flat-grain, the bat is likely to flake because the stress is too high.  The flat-grain can't handle as much stress and fatigues at a much faster rate than the edge-grain, but the edge-grain can handle more stress than any part of a maple bat.
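The crack-growth behavior described above is commonly modeled with Paris' law, da/dN = C(ΔK)^m.  Here is a minimal sketch of that fatigue process in Python; every constant below is an illustrative placeholder, not a value from the cited bat studies:

```python
import math

# Minimal sketch of fatigue crack growth via Paris' law: da/dN = C * (dK)^m.
# All constants are hypothetical placeholders chosen for demonstration.
C, M = 1e-10, 3.0            # material constants (dK in MPa*sqrt(m))
STRESS_RANGE = 200.0         # MPa, stress swing per impact (hypothetical)
Y = 1.12                     # geometry factor for an edge crack
a, a_critical = 1e-4, 5e-3   # crack length in m: 0.1 mm grows to 5 mm

cycles = 0
while a < a_critical:
    delta_k = Y * STRESS_RANGE * math.sqrt(math.pi * a)  # stress intensity range
    a += C * delta_k ** M                                # growth on this cycle
    cycles += 1

print(f"fracture after roughly {cycles} impacts")
```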

      Finite Element Analysis of a Hockey Stick

A hockey stick is designed very differently than a baseball bat and doesn't fail quite as often.  A hockey stick can bend and snap back, creating a whipping action that sends the puck toward the goal.  A shot producing 100 mph puck speed requires the stick to withstand about 560 N of force, causing roughly a 30° deflection, or about 15 cm.  The majority of wooden hockey sticks are made using rock elm for the shaft, which has a maximum tensile stress of 15 MPa and a maximum compressive stress of 11 MPa.  Newer sticks, however, are normally made with polyethylene fiber, which has a maximum tensile stress of 3500 MPa and is therefore much stronger [3].  The newer stick designs require less force to create more deflection and can therefore shoot the puck faster.  Like baseball bats, hockey sticks experience fatigue over time, but because of the synthetic nature of the material, fatigue leads to a weakening of the fibers instead of micro-cracks in the material, which allows for greater durability and a longer lifespan.

[1] Sherwood, James, "Characterizing the Performance of Baseball Bats Using Experimental and Finite Element Methods," University of Massachusetts Lowell, Lowell, MA.

[2] Boucher, Kyle, "Impact Stresses in Wooden Baseball Bats," Worcester Polytechnic Institute.

[3] Haché, Alain, The Physics of Hockey, The Johns Hopkins University Press, Baltimore, MD, 2002.

       


      Robotics: Collaborative Robots (Part 1)


      Robots in the Workplace

Over the course of my career I have designed and built a number of robotic material-handling systems.  The rule for designing such systems was always the same: robots and humans don't mix.  In the past, many safeguards were put into place to ensure the safety of both the human operators and factory workers.  Today, the push for higher levels of integration of robotics in manufacturing environments is forcing system designers to challenge these notions of exclusion and replace them with new techniques for robots and humans to interact.

Introducing Collaborative Robots: "Cobots"

A new outlook on how robots and humans can interact in the workplace is emerging.  This has led to the creation of the collaborative robot, more commonly referred to as the "cobot".

      Terms that are frequently used in the robotics industry are: Collaborative Robot (Cobot), Collaborative Workspace, and Collaborative Operation (Human-Robot Interaction or HRI). [1]

• Collaborative Robot: A robot designed for direct interaction with a human within a defined collaborative workspace.
• Collaborative Workspace: A safeguarded space where the robot and a human can perform tasks simultaneously during production operation.
• Collaborative Operation (Human-Robot Interaction, or HRI): A state in which purposely designed robots work in direct cooperation with a human within a defined workspace.

      Cobots Require New Safety Standards

      Imagine a work cell in which a robot is hefting an engine block.  The robot swings the engine block around from a machining station and presents it to a human operator for inspection or the insertion of some fine-detail components.  The robot then swings the engine block back into place for a next series of operations.  While the robot is performing its tasks the operator is preparing the next set of components, and the process repeats.  This scenario is easy to describe but carries a whole host of safety-related issues.

Safety is the primary concern in the development of cobots.  In a recent article published on http://www.robotics.org, Michael Gerstenberger, a Senior Sales Applications Engineer at KUKA Robotics Corporation and a member of the R15.06 safety subcommittee, begins some of his safety presentations by saying "There is no such thing as a safe robot."

      “Meaning the robot itself is only part of the equation,” he explains. “Even if I can make the robot so it won’t move fast enough to smack you and it won’t press hard enough to squeeze you, if you put a knife or a laser or a drill at the end, it’s going to hurt you. It often depends more on what it’s doing and what kind of tool it has rather than the robot itself.”[1]

      There is major work being done in the U.S., and other countries to revise old safety standards so that developers can proceed with tool and system designs based on intended applications.  The old safety standards tried to put limits on robots in terms of speed, force, etc.  This mindset is what led to physical keep-out areas filled with interlocks.  The new safety standards remove such limitations and instead use the concept of task-defined safety. 

      Currently, there are two such standards:  ISO 10218:2011 and U.S.-adopted ANSI/RIA R15.06-2012.  These are “live” standards in that active committees exist to continue to develop and refine what it means for a robot to be truly collaborative.

      The Future of Collaborative Robots

      From a developer’s point of view, robotic systems will need to make greater use of sensor technology to detect when a human enters a work area, and software technology so the robot can decide what to do about it. 
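As a toy illustration of what that sensing-plus-decision software layer might look like, here is a sketch of "speed and separation monitoring", one collaborative mode contemplated by the standards above.  The zone thresholds and speeds are hypothetical; a real system derives them from a risk assessment per ISO 10218 / ANSI/RIA R15.06:

```python
def allowed_speed(human_distance_m: float) -> float:
    """Return the allowed robot speed (m/s) for a measured separation."""
    if human_distance_m < 0.5:   # protective-stop zone (hypothetical threshold)
        return 0.0
    if human_distance_m < 1.5:   # reduced-speed collaborative zone
        return 0.25
    return 1.5                   # full production speed

for d in (2.0, 1.0, 0.3):
    print(f"human at {d:.1f} m -> robot speed {allowed_speed(d)} m/s")
```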

From a commercial standpoint, the industry is becoming crowded with major and minor players, each with their own spin on how to get their piece of the pie.  This was evident at the International Collaborative Robots Workshop, a one-day conference held in San Jose, California by the Robotic Industries Association.  The event attracted over 20 vendors, each showing off their offerings in the marketplace, with tools ranging from heavy equipment movers to little desktop units. [2]

      It’s a great time for robotics developers.  The new paradigm about how robots and humans can interact successfully and safely in the workplace is opening up lots of new opportunities.  If you want to get up to speed on the topic, a great place to start is http://www.robotics.org.  We’re not quite at the Robin Williams character in Bicentennial Man, but at least we’re moving in the right direction.


      [2] http://www.robotics.org/content-detail.cfm/Industrial-Robotics-Featured-Articles/The-Realm-of-Collaborative-Robots-Empowering-Us-in-Many-Forms/content_id/4854


      Tester-Prober Interfaces: Direct Probe (Part 2)


In this installment of our blog series on tester-prober interfaces, we discuss the latest evolution of the direct device under test (DUT) connection and its implications for the semiconductor wafer probing industry.  The move to Direct Probe™ benefits the end user with improvements in speed and testing performance, but causes difficulty for equipment designers implementing the new interface technique.  This article continues a series; part 1 reviewed the issue of interfacing a piece of automatic test equipment (ATE) to the DUT, a wafer.  For more information on this topic, click the link below:

      http://www.glewengineering.com/blog/bid/105402/Tester-Prober-Interfaces-Direct-Probe-Part-1

      Probe Cards and Impedance Matching


      Figure 1: Generic Tester-Prober Interface Architecture

Figure 1 shows a generic model of a piece of automatic test equipment, or "tester".  The ATE contains the following main elements: (1) a workstation serving as the user interface and programming station, (2) a mainframe, which houses all the power distribution and the smarts to execute tests and collect returning data, and (3) a test head, usually at the end of a mechanical manipulator.  The test head contains the actual instruments that drive signals, supply operating power, and measure response from the DUT (Roberts 12-13).

      Analog instruments present impedance challenges, but for the purposes of high-speed memory and logic testing, the digital I/O is designed for 50 ohm transmission line impedance.  In order to maximize signal fidelity and minimize loss at speed, probe cards contain matching circuitry to terminate the test connections as close to the DUT as possible.  Even the signal paths through the probe card must be designed as transmission lines with internal impedances as close to 50 ohms as possible.
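To see why the termination matters, consider the reflection coefficient Γ = (ZL − Z0)/(ZL + Z0) for a nominally 50 ohm line.  A short sketch, with illustrative load values rather than figures from any particular probe card:

```python
import math

def reflection_coefficient(z_load: float, z0: float = 50.0) -> float:
    """Gamma for a line of impedance z0 terminated in z_load."""
    return (z_load - z0) / (z_load + z0)

for z_load in (50.0, 55.0, 75.0):  # illustrative terminations
    gamma = reflection_coefficient(z_load)
    rl = -20 * math.log10(abs(gamma)) if gamma else float("inf")
    print(f"ZL = {z_load:5.1f} ohm -> |gamma| = {abs(gamma):.3f}, "
          f"return loss = {rl:.1f} dB")
```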

Modern probe cards and device interface boards (DIBs) add complexity and cost.  The electrical engineers who design these printed circuit boards (PCBs) must follow very complex design rules.  For example, the PCBs must have equal path lengths, along with ground lines and ground planes to guard the signal lines.  This makes for higher-layer-count PCBs, which increases cost.  Some high-pin-count PCBs now have 36 layers.

The impedance termination circuitry takes real estate on the boards, and there just isn't that much of it on a 9.5-inch or even a 12-inch round probe card.


      Figure 2: Generic 12-inch Probe Card

      Figure 2 shows a generic 12-inch diameter wafer probe card.  There is not much usable space for the circuitry on the PCB after excluding the contact areas for the POGO pins, the probe array, and stiffener. 

      High Pin Count Causes Excessive Probe Force

High pin count causes excessive probe force, which deforms the PCB and degrades probing precision.  High-pin-count probe cards also carry some serious mechanical burdens.  A typical tungsten probe tip (either vertical or cantilever) acts like a spring and exerts force based on its probing displacement.  Values vary but tend to hover in the range of 3 gmf per 25 um of compression.  A probe card for a graphics processing unit (GPU) can have 4000 pins spread over a 1-square-inch area.  If the probe card lands and compresses the pins by 75 micrometers, it produces a force of 36 kgf (79 psi).
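The arithmetic behind that force figure is worth seeing explicitly; a quick sketch using the numbers above:

```python
GMF_PER_25UM = 3.0     # gram-force per 25 um of probe-tip compression
PINS = 4000            # GPU probe card, pins over ~1 square inch
COMPRESSION_UM = 75.0  # overdrive at touchdown

force_per_pin_gmf = GMF_PER_25UM * COMPRESSION_UM / 25.0  # 9 gmf per pin
total_kgf = PINS * force_per_pin_gmf / 1000.0             # 36 kgf total
total_lbf = total_kgf * 2.20462                           # ~79 lbf over 1 in^2
print(f"total probe force: {total_kgf:.0f} kgf ({total_lbf:.0f} psi on 1 sq in)")
```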

As a result, mechanical engineers have added reinforced stiffener frames, up to 1 inch thick, to keep the board flat.  In one case, mechanical engineers employed a steel cap over the area of the probe array.  The PCB is also thicker as a consequence of the high layer count and the need for additional stiffness.  Some probe cards are up to 0.25 inch thick, which increases cost.

      Signal Fidelity Issues


      Figure 3: I/O Connection Path

Figure 3 shows the POGO™-based connection path from test head to wafer.  At every point where a spring contact touches a PCB, there is an insertion loss and a slight mismatch between the ideal 50 ohm drive and the actual impedance of the board.  On top of that, the 4-5 inches of separation through the POGO™ tower adds delay and attenuation to the signals.  These factors limit the upper-end performance of such probe card schemes to the hundreds of megahertz, depending on test conditions, voltage margins, etc.

There are still a lot of these POGO™ tower setups in fabs around the world.  I worked on diagnostic system designs and installations at some major players in New York and Dresden in 2013, and for their applications this scheme worked fine.  The POGO™ tower isn't dead by a long shot; it just has its limitations.

      All-in-One Prober Interface

      For those on the bleeding-edge of high-speed test in the GPU and CPU world, the Advantest V93000 Direct Probe™ test system has taken a major step towards improving signal integrity and test speed by combining the functions of the DUT board and probe card into a single high speed board.  At the same time, they have completely done away with the POGO™ tower.


      Figure 4: Direct Probe vs. Conventional Interface.  Image courtesy of Advantest.

      As Figure 4 shows, the Direct Probe scheme eliminates several signal transitions and cuts path length to the bare minimum.  An additional benefit is having more real estate in the central region of the essentially 480mm x 600mm probe card for signal conditioning, switching, and termination circuitry.


      Figure 5: From 12-inch to 480mm x 600mm

      Figure 5 shows a scale comparison between the 12-inch and the loadboard-sized probe card of the V93K.  The outer edges of the board are reserved for contact to the pin electronics but the middle third can get circuitry.

      System Implications of Direct Probe

This technology boost comes at a price, and the implications ripple outward to every supplier and maker of hardware that connects to automatic test equipment.  I will go into some of these issues in more detail in later posts, but here are the high points (see Figure 6):

      V93000 Test Head Courtesy of Advantest

      Figure 6: V93000 Test Head.  Image courtesy of Advantest.

      Probe Card

• The surface area has quadrupled, from about 113 to 446 square inches.  This turns an already expensive, high-layer-count PCB into a REALLY expensive PCB.  Such boards easily run into the tens of thousands of dollars, and I've seen some go over the $100K mark with a complex probe array attached.

• The traditional stiffener design is not up to the probe force requirements.  It was designed with just the tester pin electronics in mind; it is too thin and has a large open area in the middle third, where the probe array ends up.  Advantest's solution to this problem was the creation of a huge bar of metal called the "Bridge Beam" that spans the central open area.  It also serves as a mounting location for wafer probers.

      Wafer Prober

• Because the board docks directly to the test head, the test head of the ATE docks directly to the prober.  And since the probe card wants to be mounted inside the prober, the implication is that the test head has to go into the prober.  This affects the whole mechanical architecture of the prober, from the top deck to the internal probe card changer.  At my last company, I had to work out an entirely new system architecture for our tool.  We spent more than a year bringing it to market and had to solve major loading and vibration coupling issues.

      Direct Probe™ technology has brought a major improvement to automatic test equipment, and is one of the factors that has kept Advantest on top for decades.  As a system design engineer who has designed automatic test equipment for decades, I duly appreciate this technology.   

       

      [1] Roberts, Gordon, and Mark Burns.  An Introduction to Mixed-Signal IC Test and Measurement.  New York: Oxford University Press, 2001.  Print. 

       


      Tester-Prober Interfaces: Direct Probe (Part 1)


      Automatic Test Equipment Replaces Inefficient Manual Positioning

      The field of semiconductor automatic test equipment (ATE) has been rich with challenges and opportunities since the first transistors were mass-produced by Texas Instruments in 1954 [1].  The earliest forms of device testing were geared to laboratory work, with an electrical engineer peering through a microscope and manually positioning a few electrical probes on the surface of a device.  The probes would be connected to a collection of power supplies, meters, and perhaps an oscilloscope.

This approach, called "rack and stack", does not lend itself to mass production, as it is both slow and an inefficient use of an expensive engineer's time.  By the late 1950s, companies began producing integrated pieces of test equipment, and by the early 1970s came tools ("testers") that combined all the separate equipment into a single programmable box that could apply stimulus to a device and measure the response, thereby creating ATE.  In 1972, Advantest of Japan announced its T320/20 LSI tester [2].

      Two other pieces of the puzzle were the creation of the probe card and the wafer prober.  The probe card is, at its heart, a printed circuit board (PCB) with electrical probes in precise positions to match the connection points on the individual device to be tested.  The PCB, at one end, holds the probe tips precisely in a “probe array”.  At the other end, it has an electromechanical interface to the tester.  In the beginning, testers connected to probe cards via cables and connectors.

      The wafer prober is a specialized robot.  Its purpose is to move a wafer full of devices to be tested beneath a stationary probe card – die by die – and then bring the wafer into contact with the probe card.  This was an essential step in the move to true mass production of semiconductor devices.  Electroglas introduced the first commercial prober for testing semiconductors (the EG 900) in 1964 [3].  The prober has the mechanical challenge of handling wafers with very high precision and registering them to a probe card that is mounted within it (Roberts 13).

      Testing System Evolution

      The basic system challenges have remained the same for the last 50 years.  Figure 1 shows an example of a system diagram for a wafer test implementation.


      Figure 1: Basic Wafer Test System

Figure 1 shows the four major components of the classic test system: the tester, the electromechanical interface between the tester and the probe card, the probe card, and the prober.  (Of course, you need wafers or the whole system is pointless, but I'll leave them as the subject of a future blog.)

      Automatic Test Equipment Evolution

ATE has had to keep pace with the constant size reduction of devices and the need for more and more test lines at higher and higher speeds.  Modern testers now have to contend with hundreds of signal pins at GHz speeds, and must supply multiple power sources for CPUs and GPUs that draw current measured in amps.  The need for speed and power has rippled through every other piece of the system.  Modern systems can supply up to 1024 test lines running at GHz switching speeds.

      Electromechanical Interface Evolution

      The EM interface is the “extension cord” that connects the tester to the probe card.  It provides a way to quickly connect and disconnect the two big elephants (tester and prober) and also provides a “consumable” component that protects delicate contacts on expensive testers and probe cards. 

When there were only a handful of signal and supply lines and the maximum test speed was less than 10 MHz, it was OK to connect the tester to the probe card with long cables and connectors.  This "cable-out" technique ran into trouble when test speeds rose above 100MHz.  At that point, even the best coaxial cable begins to look lossy and introduces unacceptable levels of delay.  The next advancement was the creation of the POGO tower.


      Figure 2: POGO Tower Interface

Figure 2 shows a typical tester-to-probe-card interface, widely in use today.  The device under test (DUT) interface converts the form factor of the connection points in the tester to a more standard connection pattern that matches the connection pattern on the probe card.  Initially, DUT interface boards were just space transformers, but in modern systems additional components are placed there to provide signal pre-conditioning specific to the device being tested [5].  Figure 3 shows a typical system setup.


      Figure 3: Typical POGO Interface.  Image courtesy of Advantest

The POGO tower is a mechanically precise arrangement of spring contacts, top and bottom, with controlled-impedance connections in between.  Since the mechanical and electronic characteristics of the tower are well known, the test system can compensate for insertion losses and signal delays.  For the latest high-performance device applications, however, POGO-based interconnects have run out of gas.

The limiting issues for POGO schemes are path length and signal transitions.  A POGO tower adds upwards of 4 inches to the signal path length, but the bigger issue is the number of physical connection points between the tester pins and the probe card.  Each point is a discontinuity with an insertion loss.  Now that supply values are in the 1.2V range, the difference between logic high and low is measured in tens of millivolts, and the connection losses just eat away at the total budget.
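A rough loss budget shows how quickly those discontinuities add up.  The per-contact loss and contact count below are placeholders, not measured POGO™ figures:

```python
SWING_V = 1.2              # supply-rail-limited logic swing
LOSS_PER_CONTACT_DB = 0.3  # hypothetical insertion loss per spring contact
CONTACTS = 6               # tester pins -> DUT board -> tower -> probe card

total_db = LOSS_PER_CONTACT_DB * CONTACTS
swing_after = SWING_V * 10 ** (-total_db / 20)
print(f"{total_db:.1f} dB total -> swing drops from {SWING_V*1000:.0f} mV "
      f"to {swing_after*1000:.0f} mV")
```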

      The latest change to interface schemes is the all-in-one DUT interface/probe card that is a game changer in terms of the knock-on implications to prober and equipment designers.  I’ve been designing automatic test equipment and tester interfaces for the semiconductor industry for most of my career and this change has presented some of the most interesting engineering challenges I’ve encountered in a long time.

Check out my previous blog on Advantest shipping its 1,000th test system: http://www.glewengineering.com/blog/bid/105274/Semiconductor-Test-Equipment-Supplier-Reaches-Huge-Milestone

       Published Oct 24, 2014.

      Next: Tester-Prober Interfaces: Direct Probe (Part 2) 

      http://www.glewengineering.com/blog/bid/105463/Tester-Prober-Interfaces-Direct-Probe-Part-2

      [1] http://www.pbs.org/transistor/science/events/silicont1.html

      [2] https://www.advantest.com/US/AboutAdvantest/History/index.htm

      [3] http://www.electroglas.com/company/history.shtml

      [4] Roberts, Gordon, and Mark Burns.  An Introduction to Mixed-Signal IC Test and Measurement.  New York: Oxford University Press, 2001.  Print.

      [5] http://www.evaluationengineering.com/articles/201107/reducing-the-cost-of-test.php 



      Semiconductor ATE Supplier Reaches Huge Milestone

       

      Advantest Ships 1000th Semiconductor Automatic Test System

V93000 Test System (image provided by Advantest)

Advantest, a powerhouse in the semiconductor automatic test equipment (ATE) industry, recently shipped the 1000th V93000 test system [1].  That really is a milestone in an industry that moves so fast that most pieces of capital equipment seem to have lifespans shorter than a mayfly's.

      V93000 Steps Onto the Scene

The V93000 was introduced in 1999 by Agilent Technologies (the test division that later became Verigy).  In those days I was running the engineering group at Electroglas, a big wafer prober company.  Intel was our major customer, and keeping them happy seemed to be my primary job function.  We were called into a meeting and were shown the V93K probe card for the first time.  It was a monster compared to typical probe cards.

A probe card is the electromechanical interface between a device tester (like the V93K) and the IC devices to be tested on a wafer.  It is, in essence, a heavy-duty printed circuit board which routes signals from the device tester to a grid of microscopic needles (probes) that contact the connection points on an IC while it is still on the wafer, not yet packaged.  Devices can be tested electrically, and bad devices can be detected before they are diced up and placed into fairly expensive packages, saving the manufacturer time and money.

The number of tester pins it has to connect to dictates the size of a probe card.  For a long time, an 8-inch diameter probe card had plenty of real estate for connecting to the tester.  With an 8-inch probe card, one could route 128 test lines to the probe array, and that seemed like enough at the time.  8-inch probe cards had been around for several years, and all the wafer prober companies had built their systems around the expectation of mechanically interfacing to them.  Verigy saw the future, however, and knew that the transition of the semiconductor industry to 300mm wafers would bring larger devices with higher pin counts.

The V93000's Impact on the Semiconductor Industry

      The V93K test head was massive.  Its probe card was almost 12 inches in diameter and could handle more test pins which was very important to Intel since the future to them was bigger and faster microprocessors with many more connection points.  The impact to Electroglas and every other prober company was the need to mechanically interface with the monster probe card.  We were being told in no uncertain terms that our new model tool would have to accept the V93K probe card or Intel would not buy it.

      It was an exciting time for me personally because I got to preside over and direct the system specification and design for a generational change of our tool.  There were many challenges to grow a system, which had been dealing with 200mm wafers and 8-inch probe cards and make it handle 300mm wafers and 12-inch probe cards.  Which core technologies could we keep and which had to go?  How quickly could we produce a system?  Would I ever see my family again?  Verigy had made a bold move to a new standard in test and the knock-on implications rippled throughout the semiconductor industry.

Why Is the V93000 Still the Top Choice?

For more than 15 years, Verigy (acquired by Advantest in 2011) and its predecessors have pushed the envelope in the world of high-speed test.  They have increased the pin density of the tester and the top speed of the electronics again and again to keep the V93000 system fresh and relevant.  That commitment to innovation, and the longevity of their flagship test system, can be an example to us all.

      The semiconductor industry is now poised to transition yet again to 450mm diameter wafers and exactly the same challenges face the whole test and processing equipment industry.  If you are not critically examining your company’s offerings with an eye to 450mm you should be.  If you could use a system-level architect who has been through this process numerous times to help specify and manage the development of your next big thing, perhaps we can help.

[1] https://www.advantest.com/US/News/ADVP008934


      Mechanical Engineers Develop Squishy Robots


The robotics industry is constantly taking advantage of new technology, and thus is constantly growing and evolving.  The robots of today are no longer big, clunky machines that are dangerous to operate and dangerous to be around in general.  In fact, robots now not only work side-by-side with humans, but are also used to effectively perform medical procedures on people.  While all these advancements are amazing, people in the industry are constantly searching for ways to move the field forward.  This week, mechanical engineers at MIT created a new material made from polyurethane foam and wax which may find application in "soft" robots.

      New Squeezable Material

MIT mechanical engineering professor Anette Hosoi and her former graduate student Nadia Cheng, alongside researchers at several other institutes and universities, have developed a new material that could allow robots to "squeeze through small spaces and then regain their shape" (Thilmany, 2014).  This advancement would be a huge step for the robotics industry, which is constantly striving to reduce the size of robots and make them able to get into hard-to-reach areas.

      This new material creates many new possibilities for how robots could be used in the very near future.  In the past, metal, plastic, wood, or composites have been the primary materials used for constructing robots.  The one thing these all have in common is that while they are extremely tough and durable, they are only minimally flexible.  This new material "made from wax and foam is capable of switching between hard and soft states" (Thilmany, 2014). 

To even start this process, the researchers needed to figure out how to create a soft material that was still controllable (a necessity when working with robots).  They accomplished this by "coating a foam structure with wax" (Thilmany, 2014).  As we all know, foam can be easily squeezed into small spaces, making it the perfect candidate for such an ambitious task.  Foam also bounces back to its original shape and size after being squeezed into tight spaces or shapes.  The benefit of using wax is that it has a relatively low melting point and is easily cooled.  According to Hosoi, "running a wire along each of the coated foam struts and then applying a current can heat and soften the surrounding wax".  Wax is an adaptable material: if fracturing occurs, the wax can be reheated and then cooled, and the structure returns to its original form.  This provides some room for error without costing a fortune to repair what is already an expensive robot.

      Building a Squishy Robot

The process of building this new "squishy robot" began when "researchers placed a polyurethane foam lattice in a bath of melted wax, they then squeezed the foam to encourage it to soak up the wax" (Thilmany, 2014).  The foam works like a sponge in that it can absorb liquids.  One might wonder, though, how the wax remains inside the foam lattice after it has been heated.

This evidently came up in their research, because for the second version of the foam lattice a "3D printer was used to allow them to carefully control the position of each of the struts and pores" (Thilmany, 2014).  This made the printed lattice more controllable than the original polyurethane foam model, but it also increased the cost.  While the first version works, the printed version can be modified and refined through test analysis.

What Will Squishy Robots Be Used For?

With a robot that can squeeze into tight spaces and then regain its original shape, the possibilities for its use seem nearly endless.  I could see them being used by police departments to disable bombs, keeping officers out of the line of fire.  Another possibility the engineers at MIT believe feasible is using this "soft" robot as a medical device to "move through the body to reach a particular point without damaging organs or blood vessels along the way" (Thilmany, 2014).  I can't wait to see what role this new squishy robot plays in our future.

      https://www.asme.org/engineering-topics/articles/robotics/squishy-robots?cm_sp=Home-_-HomeContent-_-Squishy-Robots


      Thermal Analysis Devices Just Got More Affordable

       

       

Image: Finney County in southwestern Kansas is now irrigated cropland where once there was short-grass prairie.  NASA IR image with false color.  Photograph credit: NASA/GSFC/METI/Japan Space Systems, and U.S./Japan ASTER Science Team.

      Current Uses of Thermal Analysis Devices

One of the benefits of our space program (apart from TANG®) has been the development of infrared (IR) detector technology.  Thermal analysis cameras that can see from the near IR (around 800-1200 nm) to the far IR (8-12 um), depending on their detector technology, have for decades been a part of many public and not-so-public satellite programs that observe everything from crops, to images of your city, to Homeland Security-related work.
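Those wavelength bands are no accident: Wien's displacement law puts the peak thermal emission of everyday scenes squarely in the far-IR band.  A quick sketch (the example temperatures are illustrative):

```python
WIEN_B = 2.898e-3  # Wien's displacement constant, m*K

def peak_wavelength_um(temp_k: float) -> float:
    """Wavelength (um) of peak blackbody emission at temp_k."""
    return WIEN_B / temp_k * 1e6

for label, t in (("room-temperature scene", 295.0),
                 ("human skin", 305.0),
                 ("hot soldering iron", 600.0)):
    print(f"{label}: peak emission near {peak_wavelength_um(t):.1f} um")
```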

The government has pumped money into IR sensor technology through various agencies, and we all get to benefit as the results reach the market.  We can't get our hands on the super-secret defense cameras yet, but some cool new things are coming to Amazon real soon: thermal analysis cameras will shortly be available for purchase by consumers.

      My Work With IR Cameras

I have worked on IR microscopy and thermal imaging systems and analysis for years in order to see into the workings of semiconductor devices.  The systems I have worked on are complex combinations of high-accuracy motion systems and specialized optics such as solid immersion lens (SIL) technology.  In the case of the most recent system, I architected a full wafer-level prober integrated with the diagnostic tool so that testing could be done at the wafer level.

      Those interested in that system can see a paper I presented at the IEEE Semiconductor Wafer Test Workshop in 2012:

      http://www.swtest.org/swtw_library/2012proc/PDF/S08_02_Portune_SWTW2012.pdf

It turns out that silicon is largely transparent to near-IR wavelengths (depending on doping).  This allows for some really interesting diagnostic opportunities.  If you could see in the near-infrared region and looked at the backside of a chip as it operates, you would see what looks like a cityscape at night from space; depending on the magnification of the optics, you could see all the way down to a single transistor blinking as it switches.  Such transitions are visible because, as a transistor switches, it passes briefly through its linear region and emits a few photons of IR energy.

      Static, bright spots can be heat signatures from power dissipation like shorts or heavy current draws.  Blinking spots result from the ON-OFF-ON transitions of flip-flops as each transistor slides briefly through its linear region on its way to a stable state.  With the right magnification optics it is possible to zoom in on individual cells and look for logic faults, stuck-at faults and crosstalk effects that result from subtle design rule violations.  If a system adds an IR laser, it can stimulate the circuitry and then changes in operating behavior can be seen.  The world of semiconductor failure analysis (FA) owes a lot to these systems.

The heart of all these systems, from diagnosing bad ICs to seeing bad guys at night from space, is the IR camera.  These cameras have always been very expensive (our system camera runs in the tens of thousands of dollars), and in order to get decent S/N on the image they typically need to be cooled.  The best such cameras have traditionally used liquid nitrogen to get the sensor down to around 70K.  One of the big names in IR sensor camera technology in the U.S. is Raytheon.

      IR Imaging Comes to Consumers

According to a recent journal publication from Raytheon (http://www.raytheon.com/newsroom/technology_today/2014_i1/nextgen.html), new breeds of IR sensors that do not require cooling are becoming available.  Although Raytheon's sensors have traditionally been produced in very low quantities at very high cost, Raytheon has partnered with Freescale Semiconductor to make these devices in mass quantities.

This means that the consumer can now have a useful, low-cost thermal imaging camera system.  Just this week, Seek Thermal (http://www.obtainthermal.com), a Santa Barbara-based startup, made a $199 IR camera/sensor accessory for smartphones available for purchase.  Their website illustrates some intriguing applications for the camera in a consumer environment.

Specialized, high-cost IR camera systems will continue to have a place in industry.  When you need to see individual photons and resolve spots down to the sub-micron level, only the most cutting-edge camera will do.  For those of us in the industrial world, we can complete the circle by thinking of things to do with a really low-cost IR camera in the factory.  For the price of one so-called industrial camera, you could perhaps network 20 or so cheap ones and get better results.  Personally, I have a few ideas that I plan to pursue.  Stay tuned.
