Why are spacecraft data systems obsolete at launch?

45

6

One might think that spacecraft would be on the cutting edge of technology. However, when looking over details of spacecraft, it seems their computer systems are often very much behind the times. For example, the Curiosity rover was launched in 2011, when consumer laptop systems ran in the GHz and had GB's of memory. Curiosity's CPU runs at 132 MHz and the system only has 256 MB of RAM (source: http://en.wikipedia.org/wiki/Comparison_of_embedded_computer_systems_on_board_the_Mars_rovers ). I realize there may be some lag from obtaining the parts for the spacecraft before it is built and ultimately launched, but this seems extreme. Why don't spacecraft have more up-to-date data systems?

GreenMatt

Posted 2013-07-17T13:49:00.843

Reputation: 808

@GreenMatt The Curiosity (and the MER rovers) uses 32-bit processors.nos 2013-07-26T17:37:26.520

2Actually, most desktops (and even many laptops) are now 64 bit systems.Donald.McLean 2013-08-06T16:55:14.197

@Donald.McLean: True, but that was just an example (which I had some first hand knowledge of and which surprised me when I learned about it).GreenMatt 2013-08-06T21:18:36.630

@GreenMatt My point is that you made a clear and specific statement "32 bit processors are the commercial standard." and I am disputing that statement. Yes, it seems bizarre that many spacecraft are launched with outdated CPUs. In 1999, SM3A replaced the original Hubble computer with a 486 (six full years after the Pentium was released). However, Chad's point is still valid.Donald.McLean 2013-08-07T04:56:53.650

@Donald.McLean: When the example I was referring to was launched, 32 bit processors were normal for desktop systems. As for Chad's point, when Pentiums were the standard processors in desktops, most people considered 8086's to be obsolete; furthermore, I didn't ask "Why don't spacecraft use cutting edge data systems?"GreenMatt 2013-08-07T15:16:23.723

"One might think that spacecraft would be on the cutting edge of technology." I'm guilty of wanting more "Star Wars"-like future and less "2001". But no one hears you scream in space... Excellent question +1Eric Platon 2015-02-08T09:43:26.623

http://www.nasaspaceflight.com/2013/07/brains-sls-flight-computer-enters-build-phase/ is the story on them starting to build the flight computer for the SLS, now in 2013. So everything has been selected for use. Imagine how we'll think it's outdated when the SLS becomes operational. Or a decade into its operation.nos 2013-08-17T12:17:47.177

14not cutting edge != obsolete.Chad 2013-07-21T04:41:02.950

@Chad: True, but an 8 bit processor is ancient when 32 bit processors are the commercial standard for desktop systems.GreenMatt 2013-07-21T12:11:53.887

Answers

48

There are a number of reasons why spacecraft electronics typically lag what is commercially available by several years.

Radiation tolerance

Spacecraft electronics are very susceptible to radiation phenomena that terrestrial electronics are largely protected from by the Earth's atmosphere and magnetic field. Common radiation-induced failure mechanisms are: Single-Event Effects/Upsets (SEE/SEU), most commonly thought of as a flipped bit; latch-up, where a parasitic circuit path locks the part in a stuck, high-current state until it is power cycled; burn-out, where a high-energy particle (e.g. a proton or neutron) destroys the part; and total dose, where long-term exposure (rather than a freak event) degrades the part. As chips and circuits advance and pack transistors more tightly, the probability of these events increases.
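The flipped-bit failures above are commonly masked with redundancy and voting. Here is a minimal software sketch of the idea, triple modular redundancy with a bitwise majority vote; the names are invented for illustration and this is not any mission's actual flight code, which would typically use hardware voting, EDAC memory, and periodic scrubbing:

```python
# Sketch of software triple modular redundancy (TMR): keep three
# copies of a critical value and take a bitwise 2-of-3 majority vote
# on every read, so a single flipped bit in one copy is masked.

def majority_vote(a: int, b: int, c: int) -> int:
    """Each output bit agrees with at least two of the three inputs."""
    return (a & b) | (a & c) | (b & c)

class TmrRegister:
    """Stores three redundant copies and repairs them on each read."""

    def __init__(self, value: int):
        self.copies = [value, value, value]

    def write(self, value: int) -> None:
        self.copies = [value, value, value]

    def read(self) -> int:
        voted = majority_vote(*self.copies)
        self.copies = [voted] * 3  # "scrub": overwrite any flipped copy
        return voted

reg = TmrRegister(0b1010)
reg.copies[1] ^= 0b0100      # simulate an SEU flipping one bit in one copy
assert reg.read() == 0b1010  # the vote masks the upset
```

Note that voting only masks single upsets per word; two copies flipped in the same bit position would out-vote the good copy, which is why flight systems also scrub memory frequently.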

Several techniques and testing methods exist to demonstrate whether electronic assemblies are robust in space radiation environments, but these tests are expensive. So once they've been done for a part, component, or assembly, the trade is often made to accept less performance, save the cost of re-testing, and avoid the risk of complete mission failure.

Reliability

It is easier to do maintenance on a terrestrial computer, and the costs of failure are often much lower than for spacecraft. Ground systems also don't have the same tight power, size, and mass budgets that space systems do, which limit the amount of redundancy that is feasible. One solution is to continue to use parts that have been shown to have high reliability. Another way to increase reliability is to perform parts screening and lots of electronics testing (e.g. burn-in to screen out infant mortality, random vibration testing to mimic launch environments, shock testing to mimic pyrotechnic events like fairing jettison, and thermal vacuum testing to mimic space). This testing takes time and is expensive. The time delay alone puts most space systems at least one Moore's law cycle behind the latest consumer parts.

Build time for satellites

To say nothing of the avionics, satellites take a long time to build. Even when the computers are done the rest of the vehicle has to be assembled and tested. For large spacecraft this can take years. Meanwhile, the computer isn't getting any younger and an aversion (often justified) to risk means upgrading it would require many of these tests to be re-done.

Power consumption

Over time Moore's law helps chips to increase in processing power and decrease in power consumption, but generally speaking, when comparing contemporaneous parts more powerful chips consume more power. Spacecraft are almost universally power starved, so there's little incentive to use a more power-hungry chip than is absolutely necessary. Everything in a spacecraft is a trade-off: a Watt of power used for the main flight computer carrying around unused cycles is a Watt that can't be used for RF communications, or providing power to a payload (when that payload isn't communications), etc.
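The zero-sum nature of that trade can be made concrete with a toy power budget; every number below is invented purely for illustration, not drawn from any real spacecraft:

```python
# Toy spacecraft power budget: with a fixed orbit-average power
# supply, every extra Watt allocated to the flight computer must come
# out of some other subsystem's allocation. All figures are invented.

solar_array_w = 100.0  # orbit-average power available (Watts)

budget = {
    "flight_computer": 10.0,
    "rf_comms": 35.0,
    "attitude_control": 20.0,
    "thermal": 15.0,
    "payload": 15.0,
    "margin": 5.0,
}
assert sum(budget.values()) == solar_array_w  # budget must close

# Swap in a faster CPU that draws 8 W more: with a fixed array,
# those Watts have to come from somewhere -- here, the payload.
budget["flight_computer"] += 8.0
budget["payload"] -= 8.0

assert sum(budget.values()) == solar_array_w  # still closes
print(budget["payload"])  # 7.0 W left for the payload
```

In practice the trade is harsher than this sketch suggests, since extra CPU power also becomes waste heat that the thermal subsystem must reject without convective cooling.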

Paperwork

Paperwork and process can be as dominant as any of the other reasons. The aerospace industry has a historically high barrier to entry. One reason is the human capital required to build and launch spacecraft, but equally important is the space heritage of the software and components that go into them. Space environments are more challenging than terrestrial ones in a variety of ways and often require unique solutions (for avionics, rejection of heat without convective cooling is a good example). Launch environments were discussed above. Qualification of components is a real-world, hardware-centric task, but there is a paper trail that backs up this analysis and provides confidence to a spacecraft builder's customers and the launch provider that the vehicle will be safe during ascent and that it will operate in space. This is proven through a combination of test, analysis, and demonstration, but most of the people who care don't witness or oversee these activities directly, so they rely on excellent paperwork to provide that confidence. Once you've gone through the trouble of getting buy-in on widget X, the effort associated with obtaining Δ buy-in for widget Y, or even X+, is harder to justify if the older part still works. Aerospace suppliers (prime contractors and all the way down the supply chain) are often also required to have rigorous quality processes in place, i.e. more paperwork and process. All of this slows the pace of innovation and change in exchange for predictability.

Launch delays

Then once the spacecraft is ready it needs to be launched, and launches can slip months if not years.

Adam Wuerl

Posted 2013-07-17T13:49:00.843

Reputation: 3 167

The old link is dead and Agile Aerospace has moved.

Adam Wuerl 2016-09-25T22:10:50.053

1i think you missed a big one in power consumption more powerful chips use more power.Chad 2013-07-21T04:42:21.063

2And perhaps the biggest reason of all: PAPERWORK! It takes years and mountains of paperwork to get a particular piece of hardware "space-qualified". By the time that product is space-qualified, the related consumer technology has raced lightyears ahead...robguinness 2013-07-24T12:34:35.877

1

For what it's worth, despite these reasons above I do think a paradigm shift is coming precisely because advances in avionics are making small spacecraft more powerful and affordable, which because of their size and complexity are cheaper and quicker to produce and mitigate many of the issues above. In fact, I wrote a whole post about Agile Aerospace.

Adam Wuerl 2013-07-24T14:02:15.710

30

A big part of it is reliability. NASA could probably put in an Intel Xeon chip made in 2012 that has a crazy high amount of processing power.

However, the chip that was used, the RAD750, has years of experiments and usage behind it, such as being used in a variety of spacecraft including:

  • Deep Impact comet chasing spacecraft, launched in January 2005 - first to use the RAD750 computer.
  • XSS 11, small experimental satellite, launched April 11, 2005
  • Mars Reconnaissance Orbiter, launched August 12, 2005.
  • WorldView-1 satellite, launched Sept 18, 2007 - has two RAD750s.
  • Fermi Gamma-ray Space Telescope, formerly GLAST, launched June 11, 2008
  • Kepler space telescope, launched in March 2009.
  • Lunar Reconnaissance Orbiter, launched on 18 June, 2009
  • Wide-field Infrared Survey Explorer (WISE) launched 14 December, 2009
  • Solar Dynamics Observatory, launched Feb 11, 2010
  • Juno spacecraft, launched August 5, 2011
  • Curiosity rover, launched November 26, 2011

Because of use since '05, NASA can be fairly confident that the chip won't fail due to radiation problems, etc.

Why? Well, I would say that John Bensin's answer summed it up pretty well, and I won't try to top it:

I wouldn't think this would be the case at all. If anything, NASA would want to use hardware (and software) that has been extensively tested throughout years of use, both in NASA and in industry as a whole. The last thing NASA wants is to find a bug in a spacecraft's system at an inopportune moment, and when you’re talking about devices that need to travel potentially hundreds of thousands of miles through space, there are many inopportune moments.

Undo

Posted 2013-07-17T13:49:00.843

Reputation: 9 644

Henry Spencer (Well known in newsgroups) has commented that with care, you can use non-space rated parts. But that care is interesting. Need redundancy, and ability to do fast recovery from faults. Which is hard. (He worked on a microsat using only commercial parts as I recall).geoffc 2013-07-17T16:07:17.807

Yep. Goes back to power, and complexity of design; something more powerful, but untested, needs a backup system in case it fails, otherwise you just wasted hundreds of millions of dollars on space junk. That backup has to be able to assume complete control of the craft at a moment's notice, so it has to be well-integrated, and that can create other weak points in the design, so those have to be redundant; eventually you're putting two computers in one spacecraft, each powered on but one just watching the other, and that's a luxury given most spacecraft's power systems.KeithS 2013-08-09T19:38:20.427

28

One might think that spacecraft would be on the cutting edge of technology.

I wouldn't think this would be the case at all. If anything, NASA would want to use hardware (and software) that has been extensively tested throughout years of use, both in NASA and in industry as a whole. The last thing NASA wants is to find a bug in a spacecraft's system at an inopportune moment, and when you’re talking about devices that need to travel potentially hundreds of thousands of miles through space, there are many inopportune moments.

You might also find this question on Programmers.SE interesting; it addresses the programming languages, hardware, etc. used to construct the Mars Curiosity rover.

Also, I imagine that the lower-spec hardware NASA uses has lower power requirements than cutting-edge, high-powered hardware. For example, if the rover doesn't need a faster processor to operate, why waste space and weight by powering such a processor when a lower-spec one will suffice?

John Bensin

Posted 2013-07-17T13:49:00.843

Reputation: 381

1NASA (and most other space agencies) have a ratings system -- the TRL (Technology Readiness Level), to rank things that are well known & flight tested vs. experimental technology. If you build a mission around too much unproven technology, you run the risk of delays, cost overruns, etc.Joe 2013-07-20T03:28:44.750

Hundreds of thousands of miles? That gets you to the Moon, give or take. Make that more like hundreds of millions of miles, rather; at least that'll get you to Mars.Michael Kjörling 2014-02-08T16:01:39.433

8

Another big reason is there simply isn't a need to do anything more powerful. There exist many applications on Earth where reliability is more important than speed. For instance, a vending machine contains a simple computer. You don't want that to crash and take your money.

The vast majority of the processing used today by computers is in the graphical interface. As there is no satellite that runs a graphical interface, it doesn't really make that much of a difference.

The point of a satellite's computer is to keep the satellite alive, pointed in the right direction, manage power, and collect data for use on the ground. Thus, they don't need to have gigahertz processors, they just need to be a data pipe. They need to do this with a high level of precision. You can't go and push the power button on a spacecraft, you need its systems to work flawlessly at all times.

Computers are regularly used on the ISS by the astronauts, but these are used for non-critical systems. It's only when the computer has to do significant processing of the data that speed matters, and except for some compression, most of that is still done on Earth. Furthermore, most of the imaging-heavy systems out there have custom on-board chips that help process the images quickly, allowing less of the work to be done on the main processor.

PearsonArtPhoto

Posted 2013-07-17T13:49:00.843

Reputation: 67 296

1In addition to compression, digital signal processing can benefit from significant processing power. Such may be done on specialized hardware, but such might still count as part of the "computer".Paul A. Clayton 2013-07-18T23:03:28.070

Uses for more processing power and memory can be easily found even when GUI's are not an issue. Data compression, improved handling of unexpected condtions, etc.GreenMatt 2013-07-19T15:42:44.167

3

There is an anime called "Rocket Girls" in which the protagonist asked this same question. The answer she got was that they only use Classic Technology: technology that has built up a reputation of success over time. This is true for medicine and general aviation as well. In fact, this is true for most branches of engineering; it's primarily software engineering that keeps using the "latest" stuff.

Also, CMOS is more susceptible to radiation than TTL, so when you are doing radiation hardening it may be better to have a slow 100 MHz TTL-based chip than a fast 3.4 GHz CMOS-based chip.

user39

Posted 2013-07-17T13:49:00.843

Reputation:

RAD 750 is built with CMOS technology...Quonux 2013-07-30T18:28:39.610

2

A couple of things I might add to the good answers already here:

  • Selection timeframe. The decision as to which hardware to use for a vehicle is made long before (years?) the vehicle is launched. Thus, at launch it is probably obsolete.
  • Radiation hardening. Often, these comparisons focus on one or two specs that are interesting for terrestrial uses: CPU clock speed and RAM, for example. While these are important, fault tolerance in a high-radiation environment matters more while flying by Jupiter than while playing Doom. This tolerance creates a trade-off that doesn't help the other specs.

Erik

Posted 2013-07-17T13:49:00.843

Reputation: 7 422

1

The same thing happens in aviation as you've identified for space technology. Major factors would be reliability, "hardness", and development time frames, but there are other considerations.

Any life-critical system has to be trustworthy, and when you can't get at it to fix it if it breaks (like robotic space probes), reliability becomes paramount also. The longer a thing has been around and accumulated experience, the more it can be trusted. Also, the more complex a system is, the more difficult it can be to verify that all "working parts" are working as they should. The newest technologies are always pushing boundaries of one form or another, challenging the limits of what can be done. That can put a thing on the edge of catastrophe - not a good place to be when lives are on the line. Newer computing technology is always more sophisticated (more complex) than what it replaces, making verification/validation more difficult.

Airplanes and rockets operate in harsh environments; the vehicles themselves create harsh or perhaps extreme environments for some of their own components. It's difficult to build electronic components and systems which can operate in such conditions (temperature, shock/vibration, EMI, radiation, etc.) without some challenge to reliability.

It takes a long time (years) for a new aircraft or space system to make it from the initial design to "first launch", and design of subsystems (including those employing computers) has to be frozen at some point in the process. Computer technology moves much faster, so the designs get frozen with what is trustworthy (perhaps already becoming obsolete), and computer technology marches further along before the aircraft or rocket takes flight.

It really might not be a wise thing to try to do it any other way. When your life is in the balance, much better to have an old, crude but dependable system than something new and snazzy, but not fully proven.

Anthony X

Posted 2013-07-17T13:49:00.843

Reputation: 7 976

1

  • Selection Time: spacecraft are designed and built years prior to launch. The selected processor at build time, even if top of the line, will have been eclipsed by launch time.
  • Vibration Tolerance: launch of spacecraft requires vibration tolerant computer systems; many newer processors are not yet rated at design time.
  • Radiation Resistance: Smaller circuits are more subject to radiation-induced errors than larger circuits. Most more advanced processors use smaller circuitry to reduce energy costs, thermal loads, and operation cycle times.
  • Price: Older processors can be bought for far less than current leading edge processors; prices drop noticeably once patents expire.
  • Lack of need: Not all satellites need highly robust processing solutions.
    The entire Apollo program was run with processing power equivalent to a couple of high-end Linux workstations, and that includes the mainframes at JSC and Cape Kennedy. The on-board computer on Apollo was about as powerful as many digital watches. (Roughly 76 kB of total memory: 36K words of ROM plus 2K words of RAM, at about 2 bytes per 15-bit word.) Its clock ran at about 2 MHz, rather fast for its day. I've bought $20 calculators with better specifications than the AGC.
    The tasks of most satellites can be reliably run with older processors without mission compromise.
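Assuming the commonly cited AGC figures (36K words of fixed memory, 2K words of erasable memory, and 16-bit words counting the parity bit), the total memory is easy to check:

```python
# Rough check of the Apollo Guidance Computer's memory size, using
# commonly cited figures: 36K words of fixed (rope-core) memory plus
# 2K words of erasable memory, 15 data bits plus 1 parity bit per word.

BITS_PER_WORD = 16          # 15 data bits + parity
ROM_WORDS = 36 * 1024       # fixed memory
RAM_WORDS = 2 * 1024        # erasable memory

total_bytes = (ROM_WORDS + RAM_WORDS) * BITS_PER_WORD // 8
print(total_bytes / 1024)   # 76.0 -> roughly 76 kB total
```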

aramis

Posted 2013-07-17T13:49:00.843

Reputation: 9 491

Price? Within the total cost of most spacecraft, the price of the processor(s) is insignificant. As I commented to another answer, a use could always be found for extra processing power and memory.GreenMatt 2013-07-21T00:27:08.250

@GreenMatt Some projects, especially NASA projects, have to have expensive proof-of-capability testing; the venerable Zilog Z-80, Intel 8080, and Motorola 68000 are well-established microcontrollers for a variety of applications, and passed mission rating for vibration and radiation many years ago. The cost of mission rating a processor, assuming that it would pass the vibration and radiation tests in the first place, is on the order of $100,000 last I read (and that was in the late 1990s), just to do the destructive tests. Using an already-rated processor saves testing expense.aramis 2013-07-21T00:39:51.157

Most NASA spacecraft cost on the order of hundreds of millions of dollars and some are in the billions; $100K is pretty insignificant in such a budget.GreenMatt 2013-07-21T01:05:34.677

You've obviously never dealt with Federal bean-counter types. They'll niggle over a $50K program, while approving a $30K toilet seat.aramis 2013-07-21T01:41:36.980

My experience is irrelevant to this discussion, but since you bring it up, just how much first hand experience do you have?GreenMatt 2013-07-21T02:26:58.493

I was a federal employee for 3 years (National Archives), and worked for a federal grant recipient for 6 years prior to that. PLENTY of experience with federal bean-counters. Plus, my dad was a project manager for the USAF (GM16 level)... My experience with NASA is solely as one who follows it, but mention of processor expenses has in fact been mentioned in several project papers over the last 15 years. Keep in mind: a $10 processor, when space rated, is close to $10,000... because they can charge that for ones they guarantee will survive launch.aramis 2013-07-22T03:51:47.023

It's quite possible to work as a federal employee and on federal grants without needing to deal with "bean-counter types". Also, I suspect you have some bias through which you see those "bean-counter types" that - in my experience anyway - is erroneous. While I've never been a federal employee, I've worked on government contracts - mostly NASA projects - for a lot more than the total of 9 years you cite. I've NEVER seen a budget analyst over ride an engineer or scientist when it came to crucial parts; if there's a financial shortfall they usually try to find a way to make things work.GreenMatt 2013-07-22T13:30:30.730

I've seen it documented in records of the Army Corps of Engineers. I've seen it repeatedly in the records of the US Forest Service, as well. And the Bureau of Indian Affairs Educational System. (Neat thing about archival work: you get to skim the records as part of your job.) The bean counters picked some of the stupidest things to delete. In any case, space-rated (or even aviation-rated) versions of inexpensive items, even if no different, are usually considerably more expensive than off-the-shelf non-flight-rated ones.aramis 2013-07-22T21:21:43.967

@GreenMatt I work for a large and notable company that deals almost exclusively with government contracts, including some NASA project, and I agree with aramis that cost concerns often come into play over small "bean-counting" issues in billion-dollar projects.

While a system may have a billion-dollar price tag, every system is composed of individual subsystems and parts, and each of those has a separate budget. Small (in comparison to the overall billion) expenses don't drop off the radar. – called2voyage 2013-07-24T13:13:07.457

0

Interestingly, this does not apply to all spacecraft. The Flock satellites by Planet Labs are actually pretty cutting-edge, as stated by one of the developers on The Amp Hour podcast. In fact, the testing of new satellite designs was slowed down by the time it took to actually launch the satellites once they were manufactured.

I suggest listening to the podcast, this episode was quite interesting.

JohnEye

Posted 2013-07-17T13:49:00.843

Reputation: 125