Since the various design elements of a thermonuclear weapon combine to form a complex integrated system, the design space of these weapons involves complicated tradeoffs between design objectives and admits many possible variations.

To address this in an orderly fashion, I first sketch out several basic structures for the overall weapon, in rough order of increasing sophistication (Subsection 4.5.1 Principal Design Types). Following this, I address a series of possible tradeoffs and the issues connected with each.
4.5.1 Principal Design Types

The descriptions of weapon designs, and the developmental sequence described here, are speculative, but they are consistent with all facts about weapons, weapon development programs, and physics of which I am currently aware.
4.5.1.1 Early Designs

The earliest radiation implosion designs seem to have used a single large cylindrical chamber encompassing both the primary and the cylindrical secondary. The casing was hemispherical at one end, where the primary sphere was located. The thermonuclear weapon was integral to the bomb casing itself - i.e. the ballistic shell of the bomb was the support structure for the radiation case, and the physical structure that held the entire thermonuclear device together.
Both the US and UK initially used casings made of steel, which were lined with lead or lead-bismuth alloy to form the radiation case (probably 1-3 cm thick). The secondary pusher, which made up the inner wall of the radiation channel, was made of either natural uranium or lead (possibly as a lead-bismuth alloy). Operational bombs probably all used uranium tampers to maximize yield, but some test devices were equipped with lead tampers to hold down yield and fallout production. A massive radiation shield (uranium or lead) was located between the primary and secondary to prevent fuel preheating by the thermal radiation flux. A boron neutron shield was used in some designs to reduce neutron preheating.
The secondary stage consisted of the exterior pusher/tamper, a standoff gap, and a cylinder filled with fusion fuel. Lithium deuteride, highly enriched in Li-6, was the preferred fuel for maximum yield, but early shortages in lithium enrichment capacity led to the deployment of bombs containing partially enriched lithium (40% and 60% Li-6 in the U.S.), or natural lithium. Down the axis of the fusion fuel cylinder was a solid (or nearly solid) rod of plutonium or HEU for the spark plug.
The design approach of these early bombs followed that of Mike and the test devices exploded during Castle: the use of a standoff gap to create the necessary gradual compression required a large diameter (Mike was 80 inches wide, all of the Castle series devices had diameters from 54 to 61.5 inches). The rapid energy release from the primary followed by a relatively lengthy implosion required a thick casing for radiation containment, making the entire bomb very heavy. Mike weighed an anomalous 164,000 pounds, but even the Castle devices all weighed in between 23,500 and 40,000 lb.
These early bombs were thus quite massive, and had high yields. The Mk 17 and Mk 24 (the weaponized version of Castle Romeo, using unenriched lithium deuteride) had a diameter of 61.4 inches and a weight of 42,000 lb (yield: 15-20 megatons). The relatively compact and light Mk 15, whose development was completed somewhat later (and used 95% Li-6 deuteride), still had a diameter of 34.6 inches and weighed over 7,000 lb (yield: 3.8 megatons). And all of these weapons *were* bombs, since no missile could carry them. In fact, only the very largest aircraft could carry them - one per plane.
Although the primaries used in these bombs were much improved over early fission designs, they were still relatively massive initially. The TX-5 primary used in the Mike device still weighed in at well over 1000 kg, and the comparatively thick tamper and explosive layers delayed the escape of both photons and neutrons significantly, by up to 100 nanoseconds.
4.5.1.2 Modular Weapons
During the fifties the diameter of the bomb casing and the primary shrank as US and Soviet weapons became more compact, partly driven by improved primary designs. Lighter weight megaton-range weapons were desired for greater flexibility in the types of aircraft that could carry them, and for increased payload. Light weight high yield weapons were especially important for the early ICBMs, which had limited payloads and low accuracy. Only a light weight, high yield weapon would give a reasonable chance of destroying a designated target when carried by an ICBM. It was also useful if the same basic weapon design could be used in different weapon systems (bombs, ballistic missiles, cruise missiles, etc.).
This led to a modular approach to the weapon system. Instead of the aerodynamic casing of the delivered munition, the electronics, and the "physics package" being a single integrated entity - these three things were separated. The nuclear warhead proper (the "physics package") was self-contained, except for a cable connector to the electronics that detonated the explosives, and fired the neutron generator. The electronics package was separate, and could be different for each type of weapon (especially important for the varying fuzing requirements). These two components could then be fitted into different bomb or missile bodies to create multiple types of deployable systems.
Since the warhead casing no longer needed to withstand the environmental rigors of the completed weapon, it could be made out of lighter and less rugged materials. This led to the use of a light casing (aluminum alloy, or even plastic) that was lined with a high-Z material to form a radiation case.
4.5.1.3 Compact Light Weight Designs

More efficient implosion systems and the advent of boosting made primaries more compact and less massive without sacrificing yield or efficiency. At this point (which occurred in the U.S. around 1955-1956), there seem to have been different development paths available.
One path followed the existing design principles, harnessing the increased temperatures and pressures generated by boosted light weight primaries through greater radiation confinement, achieved by increasing the thickness of the radiation case at the primary end. This evolved into a separate radiation case for the primary: a spherical shell of uranium (for example) surrounding the high explosive shell of the implosion system, with an aperture for releasing the radiation into the secondary radiation chamber (the chamber made by lining the external casing). The energy absorbed by the primary case wall at a high temperature was reradiated as the temperature in the chamber dropped. This made confinement and channeling of the thermal radiation more effective. Baffles or other barriers could be added to modulate the energy transfer into the secondary radiation case.
It appears that an alternate path may have been followed by the US starting with the Hardtack I test series (although possibly first pioneered in Redwing). According to statements made by LLNL scientists Wood and Nuckolls, and LASL Director Bradbury, new design ideas were introduced at this time that extended the Teller-Ulam concept. This coincides with the development of the very light W-47 warhead for the Polaris missile (600 lb weight and 600 kt yield, later increased to 800 kt). I speculate that the design approach introduced here was the use of modulated primary energy release.
4.5.1.4 Two Chamber Designs
At some point, the development trend toward a separate radiation case around the primary led to a full two chamber design for the weapon, with some means of regulating radiation flow between the chambers (like a temporary radiation barrier). With better control over the radiation flux around the secondary, a reduced standoff with a reduced secondary diameter (and perhaps a lighter pusher/tamper) became possible.
This could also be conveniently combined with a spherical secondary design. This has been described as the "peanut design" - two spherical hollow chambers joined at the waist, with a primary sphere in one, and a spherical secondary in the other. Alternatively, a two chamber - spherical secondary design can be used with a modulated primary.
This approach offers the inherent advantages of spherical implosion: compression in three dimensions requires a smaller radius change to attain a given density than compression in two. A smaller radius change translates directly into faster implosion, an important consideration in a smaller, lighter, higher pressure weapon design which would be prone to disassemble faster.
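The geometric advantage is easy to quantify. A minimal sketch (assuming uniform compression, and a fixed axial length for the cylindrical case) compares the radius convergence needed for a given volumetric compression factor C:

    # Radius convergence ratio required for volumetric compression factor C.
    # Cylinder (2-D convergence): volume ~ r^2, so r0/r = C**(1/2)
    # Sphere (3-D convergence):   volume ~ r^3, so r0/r = C**(1/3)
    for C in (10, 30, 100):
        print(f"C={C:3d}x  cylinder: {C**0.5:5.2f}x  sphere: {C**(1/3):5.2f}x")

At 100-fold compression a cylinder must converge 10-fold in radius, but a sphere only about 4.6-fold, so a spherical implosion reaches a given density with much less travel.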
In a spherical secondary the radiation shield between the primary and secondary would evolve into a baffle between the two chambers to prevent the primary from directly (and thus unevenly) heating the side of the secondary facing it, forcing the radiation flux to diffuse into the channel around the secondary.
The primary in a two-chamber design may be effectively encased in a heavy, close fitting uranium shell that can act as an implosion tamper. By trapping the explosive gases, this shell can act as the wall of a spherical piston, forcing the expanding gases to transfer all of their energy to the inward moving beryllium/plutonium shell, and minimizing the amount of explosive required. Such a primary may use a thin uranium or tungsten tamper between the beryllium and plutonium shell layers to enhance inertial confinement of the fissile mass.
4.5.1.5 Hollow Shell Designs
It was pointed out earlier that it is difficult to efficiently compress more than the outermost layers of a solid cylindrical or spherical fuel mass. In any case, only the outermost layers actually *need* to be compressed, since they contain the lion's share of the fuel mass. It would be logical then to dispense with the idea of using a solid fuel mass in the center, and only use a hollow shell of fuel in the first place. A hollow spark plug shell could be nested directly inside the fuel shell, but a second tamper layer may be included between the two.
A hollow shell could be used with either a cylindrical secondary (making it "totally tubular"), or with a spherical design.
Several advantages are obtained with this approach.
The fuel near the center that would be inefficiently compressed is eliminated, improving overall fuel utilization.
The addition of the dense second tamper or spark plug on the inner side of the fuel layer can also directly enhance compression. Whenever a shock reaches the inner side of the fuel, it will be reflected back into the fuel at higher pressure, compressing the fuel further. If the compression gradient is continuous, it will tend to "pile up" at the inner interface, with the same effect of compression enhancement. The dense inertial tamper on the inner side of the fuel layer will also help keep it at a constant high density.
Finally, the hollow shell design allows the spark plug to accelerate to very high velocities before it goes critical. The implosion velocity at criticality could be even higher than the average maximum implosion velocity for the secondary, due to the effects of thick shell collapse and convergence. An implosion velocity exceeding 1000 km/sec is conceivable. This is so fast that densities much higher than those achieved by high explosive systems would be attained before energy production from fission becomes high enough to halt implosion. Even relatively small masses of fissile material (< 1 kg) could be fissioned efficiently.
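To see why such velocities imply extreme compression, compare specific kinetic energies. A rough sketch (the 5 km/sec figure for a high explosive driven implosion is an illustrative assumption, not a quoted value):

    # Specific kinetic energy (J/kg) scales as the square of velocity.
    def ke_per_kg(v_km_s):
        v = v_km_s * 1000.0        # convert km/sec to m/sec
        return 0.5 * v * v         # J/kg

    # A 1000 km/sec radiation-driven shell vs. a nominal 5 km/sec high
    # explosive implosion: (1000/5)**2 = 40,000 times the energy density.
    print(ke_per_kg(1000.0) / ke_per_kg(5.0))

Since the achievable compression rises steeply with the energy available per unit mass, fission energy release must climb far higher than in an HE-driven system before it can halt the implosion.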
Hollow shell secondaries would be essential for use with primaries that rely on modulated energy release to create efficient compression.
4.5.1.6 High Yield and Multiple Staged Designs
The first thermonuclear devices were high yield by most any standard (10.4 Mt for Ivy Mike, 15 Mt for Castle Bravo). But they were also very heavy, and difficult to push to even greater yields. High yield weapons with greater yield-to-weight ratios, providing even higher yields in deliverable packages were desired.
As a rough approximation, we can say that the amount of energy required to implode a secondary is proportional to its mass, since the primary energy/secondary mass ratio defines the achievable implosion velocity. The yield of the secondary should also be roughly proportional to its mass. Thus there is a roughly proportional relationship between the primary and secondary yields, using similar design principles.
From available data (based on known trigger tests, and fizzles where only the primary fired), it appears that the ratio of secondary yield to primary yield can range from 10 to 200, with 30-50 being more typical.
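The scaling argument can be made concrete. A minimal sketch (the coupling fraction and all input numbers are illustrative assumptions): if a fraction f of the primary yield ends up as kinetic energy of a secondary of mass M, the implosion velocity is v = sqrt(2*f*E/M), so scaling primary yield and secondary mass together leaves the velocity, and thus the attainable compression, unchanged:

    from math import sqrt

    KT = 4.184e12   # joules per kiloton

    def implosion_velocity(primary_kt, secondary_kg, coupling=0.05):
        # v = sqrt(2 f E / M); the coupling fraction f is an assumed value
        return sqrt(2 * coupling * primary_kt * KT / secondary_kg)  # m/s

    print(implosion_velocity(30, 500))    # ~1.6e5 m/s
    print(implosion_velocity(60, 1000))   # identical: E/M is unchanged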
If a very large yield is desired, then we must obviously have a very large primary. Large fission primaries are expensive, heavy, and potentially dangerous (due to the large amount of fissile material present). Even in very heavy weapons, the yield of the primary is limited to no more than a few hundred kilotons, limiting total yield to a maximum of 10-20 megatons.
The high yield designs actually developed (mostly in the fifties and early sixties) seem to have used refined versions of the basic thermonuclear weapon design approach, as described above, with the addition of multiple staging to achieve even higher yields. The relatively light weight W-53 9 Mt warhead/bomb deployed by the US (still in service!) was one of the highest yield warheads the US ever deployed, and is probably a 3 stage weapon.
This is really large enough for almost any conceivable destructive use (except maybe blowing up asteroids). Nonetheless, military requirements for even larger weapons have been drafted, and in the case of the Soviet Union, actually built, tested, and deployed. At one point in the mid-fifties the US military requested a 60 megaton bomb! This military "requirement" was apparently driven by the fact that this was the highest yield device that could be delivered by existing aircraft. The Soviets eventually went on to develop a 100+ megaton design (tested in a 50 megaton configuration). To make such megaweapons, a bigger driving explosion is required to implode the main fusion stage. This has led to the design of three stage weapons, where a thermonuclear secondary is the main driving force to implode a gigantic tertiary stage.
Building gargantuan bombs is not the only motivation for adopting three stage weapons however. If the fusion neutrons are not harnessed to cause fission in the tamper (either because the bomb is intended to be very clean, or very dirty) then the ratio in yields between stages is correspondingly reduced - to a range of something like 10 to 15. This limits the practical maximum yield to 3 to 5 megatons. It may be doubted whether even this is much of a limitation, since out of a current arsenal of over 10,000 warheads the US has only 50 bombs with yields over 3 megatons. In the fifties, however, this seemed unacceptably small, so "clean" weapons were deemed to require a three stage design.
Three stage design can provide other advantages though. By offering the weapon designer additional freedom in design, it may be useful even if the bomb is not especially large, clean, or dirty. For example, in optimizing a weapon to minimize weight for a given yield, a designer can consider which type of driver for the main stage is the lightest - a large fission primary or a compact two stage device. If weapon-grade fissile material is very precious, then a two-stage driver might be chosen simply to minimize the use of this material.
In a three stage weapon the radiation cases for the secondary and tertiary might be kept separate initially. The primary would implode the secondary but a barrier would prevent energy from reaching the tertiary. This barrier could be designed to ablate away during the secondary implosion, so that when the secondary energy release occurred, it would have become transparent.
Alternatively it may be useful to harness a portion of the primary's energy to create an initial weak compression shock in the tertiary to enhance compression efficiency.
4.5.2 "Dirty" and "Clean" Weapons
Whether to make a fission-fusion weapon into a fission-fusion-fission weapon is one of the most basic design issues. A fission-fusion weapon uses an inert (or non-fissionable) tamper and will obtain most of its yield from the fusion reaction directly. A fission-fusion-fission weapon will obtain at least half of its yield (and often far more) from the fusion neutron induced fission of a fissionable tamper.
The basic advantage of a fission-fusion-fission weapon is that energy is extracted from a tamper which is otherwise deadweight as far as energy production is concerned. The tamper has to be there, so a lighter weapon for a given yield (or a more powerful weapon for the same weight) can be obtained without varying any other design factors. Since this can be done at virtually no added cost or other penalty, compared to an inert material like lead, by using natural or depleted uranium or thorium, there is basically no reason not to do it if the designer is simply interested in making big explosions.
Fission of course produces radioactive debris - fallout. Fallout can be reduced by using a material that does not become highly radioactive when bombarded by neutrons (like lead or tungsten). This requires a heavier and more expensive weapon to produce a given yield, but it also considerably reduces the short and long term contamination associated with that yield.
This is not to say that the weapon is "clean" in any commonsense meaning of the term. Neutrons escaping the weapon can still produce biohazardous carbon-14 through nitrogen capture in the air. The primary and spark plug may still contribute 10-20% fission, which for a multi-megaton weapon may still be a megaton or more of fission. Significant contamination may also occur from the "inert" tamper radioisotopes, and even from the unburned tritium produced in the fusion stage. Reducing these contributions to the lowest possible level is the realm of "minimum residual radiation" designs discussed further below.
During the fifties both the US and USSR showed interest in developing basic designs that had both clean and dirty variants. These designs tried to minimize the essential fission yield by using a small fission primary, with spark plug sizes carefully chosen to meet ignition requirements for each stage without being excessive (note that although only part of the spark plug will fission to ignite the fusion stage, the essentially complete fission of the remainder by fusion neutrons is inevitable). These weapons appear to have all been three-stage weapons to allow multi-megaton yields (even in the clean version) with a relatively small primary. The dirty version might simply replace the inert tamper of the tertiary with a fissionable one to boost yield.
The three-stage Bassoon and Bassoon Prime devices tested in Redwing Zuni (27 May 1956, 3.5 Mt, 15% fission) and Redwing Tewa (20 July 1956, 5 Mt, 87% fission) were US tests of this concept. Clearly though, the second test was not simply a copy of the first with a different tamper. The fusion yield dropped from 3 Mt to 0.65 Mt, and the device weight increased from 5500 kg to 7149 kg between the two tests. The inference can be made that the tertiary in the first used a large volume of relatively expensive (but light) Li-6D in a thin tamper, which was replaced by a heavier, cheaper tertiary using less fusion fuel but a very thick fissionable tamper to capture as many neutrons as possible.
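The fusion yields quoted here follow directly from the published totals and fission fractions:

    # Fusion yield = total yield x (1 - fission fraction)
    def fusion_yield_mt(total_mt, fission_fraction):
        return total_mt * (1.0 - fission_fraction)

    print(fusion_yield_mt(3.5, 0.15))   # Zuni: ~3.0 Mt fusion
    print(fusion_yield_mt(5.0, 0.87))   # Tewa: 0.65 Mt fusion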
The 50 Mt three stage Tsar Bomba (King of Bombs) tested by the Soviet Union on 30 October 1961 was the largest and cleanest bomb ever tested, with 97% of its yield coming from fusion (fission yield approximately 1.5 Mt). Assuming a primary of 250 kt (to keep the fissile content relatively low for safety reasons), we might postulate secondary and tertiary stages of 3.5 Mt and 46 Mt respectively. These fusion stages would require about 1700 kg of Li6D (at 50% fusion efficiency), and something like 250 kt of fission for reliable ignition. If the initial spark plug firings were 25% efficient, later fission would release another 750 kt - placing the total at 1.25 Mt (close enough to the claimed parameters to match within the limits of accuracy).
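The arithmetic behind this postulate can be checked explicitly. A sketch using the approximate figure of 64 kt per kg for completely fused Li6D (the stage yields and efficiencies are the assumptions stated above):

    LI6D_KT_PER_KG = 64.0            # approx. yield of fully fused Li6D
    fusion_mt = 3.5 + 46.0           # postulated secondary + tertiary yields
    fuel_kg = fusion_mt * 1000.0 / (LI6D_KT_PER_KG * 0.5)   # 50% burnup
    print(round(fuel_kg))            # ~1550 kg, of the order of the cited 1700 kg

    # Fission budget: primary + spark plug ignition + later spark plug burnup
    print(250 + 250 + 750)           # 1250 kt, i.e. ~1.25 Mt total fission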
This was, however, a design for a 100-150 Mt weapon! A lead tamper was used in the tested device, which could have been replaced with U-238 for the dirty version (thankfully never tested!).
4.5.3 Maximum Yield/Weight Ratio
Except for safety, the weight of a weapon required to provide a given yield is the most important design criterion. In the years since the first nuclear weapon was exploded, far more money has been spent on building nuclear weapon delivery systems than on the weapons themselves. The high cost of delivery for what is basically a rather small package is due to the fact that nuclear delivery systems are generally intended to be used only once. Clearly this is true for missiles, but it is true for bombers as well since recovery and reuse is not part of their nuclear mission profile.
Since the cost of the delivery vehicle is much greater than the cost of the warhead, making the warhead as light as possible for the intended yield quickly came to dominate the weapon design process. This is normally expressed in terms of the yield-to-weight (YTW) ratio (kt/kg).
Naturally it is easier to get a high ratio for a larger bomb. The highest ratio for any weapon in the US arsenal belongs to the 9 Mt Mk-53/B-53 bomb, which happens to be the oldest weapon in service (operational since 1962), but also the largest. At 4000 kg, it has a ratio of 2.25 kt/kg. The Tsar Bomba, as tested, had a ratio of 1.7 kt/kg (its weight was 30 tonnes). As *designed* it had a ratio of 3.4-5 kt/kg!
Table 4.5.3-1. Yield-to-Weight Ratios of Current US Weapons
Weapon    YTW Ratio    Yield(kt)/Weight(kg)    In Service Date
Mk-53     2.25         9000/4000               1962
W-88      1.5           475/330
W-80      1.31          170/130
B-83      1.10         1200/1090
W-87      1.0           300/300
W-78      0.96          335/350
W-76      0.61          100/165
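The ratio column is simply the yield divided by the weight; a minimal helper reproduces it from the table's own numbers (the quoted 1.5 for the W-88 appears to be rounded up from 1.44):

    # Yield-to-weight ratios (kt/kg) from the yield and weight columns above.
    weapons = {
        "Mk-53": (9000, 4000), "W-88": (475, 330), "W-80": (170, 130),
        "B-83": (1200, 1090), "W-87": (300, 300), "W-78": (335, 350),
        "W-76": (100, 165),
    }
    for name, (yield_kt, weight_kg) in weapons.items():
        print(f"{name:6s} {yield_kt / weight_kg:.2f} kt/kg")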
The much earlier W-47 warhead seems to have achieved ratios of 2.2-2.7 kt/kg. However the YTW ratio is not everything. The W-87 and W-88 are said to use reduced amounts of expensive nuclear materials (deemed important when ambitious expansion of the US nuclear arsenal was planned in the early eighties), which, coupled with the much larger payloads of the MX and Trident II missiles, may account for the reduced (but still quite respectable) YTW ratios of these warheads.
Part of optimizing the YTW ratio is careful weight management. Very light weight primaries, the use of light weight weapon cases, and multiple radiation cases are innovations to minimize weight. Since the tamper is one of the heaviest parts of the weapon, squeezing as much energy as possible out of it is also very important.
The end of surface testing of nuclear weapons after the atmospheric test ban treaty effectively removed "cleanliness" as a significant concern for designers. Complaints about fall-out vanished, and so did the ability of the international community to monitor weapon design through fall-out analysis. The cost-effectiveness of lighter weapons put great pressure on designers to extract weight savings however they could, and it is likely that the idea of using non-fissile tampers disappeared very quickly. There is scant evidence that so-called "clean" designs were ever deployed in any quantity.
The fission yield of the tamper can be increased even further by adding slow-neutron fissionable material to it. Basically this means using enriched uranium instead of natural or depleted uranium.
Highly enriched uranium is definitely known to be used in U.S. weapons. About half of the U.S. inventory of weapons-associated HEU is less than "weapons grade" (i.e. less than 93.4% U-235). Most or all of this uranium (generally with an enrichment of 20-80%) was probably used in thermonuclear weapon tampers.
The W-87 Peacekeeper warhead (to be redeployed on the Minuteman-III) has a current yield of 300 kt, that can be increased to 475 kt by adding a HEU sleeve or rings to the secondary. Whether this represents an actual addition to the existing secondary, or whether it replaces an existing unenriched sleeve is not known. The W-88 Trident warhead is a closely related design, and has a current yield of 475 kt indicating that it is already equipped with this addition. The 175 kt yield difference amounts to the complete fission of 10 kg of U-235.
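The 175 kt figure is consistent with the standard energy release of fission. A quick check, assuming roughly 180 MeV of recoverable energy per fission:

    MEV_TO_J = 1.602e-13
    AVOGADRO = 6.022e23

    atoms_per_kg = 1000.0 / 235.0 * AVOGADRO      # U-235 atoms per kilogram
    kt_per_kg = atoms_per_kg * 180 * MEV_TO_J / 4.184e12
    print(kt_per_kg)                # ~17.7 kt per kg completely fissioned
    print(175 / kt_per_kg)          # ~10 kg of U-235 for the 175 kt step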
Now, once one considers using substantial amounts of HEU in the secondary, the question arises of why the fusion fuel is needed at all. The answer: it probably is not essential. The idea of imploding fissile material is what set Stanislaw Ulam on the path that led eventually to thermonuclear weapons. But with the availability of large amounts of HEU, and the trend toward smaller weapon yields (compared to the multimegaton behemoths of the fifties), Ulam's idea of using radiation implosion to create a light weight high-efficiency pure fission weapon returns as a viable possibility. It is an interesting question whether all modern strategic nuclear weapons *are* in fact thermonuclear devices!
4.5.4 Minimum Residual Radiation (MRR or "Clean") Designs
It has been pointed out elsewhere in this FAQ that ordinary fission-fusion-fission bombs (nominally 50% fission yield) are so dirty that they merit consideration as radiological weapons. Simply using a non-fissile tamper to reduce the fission yield to 5% or so helps considerably, but certainly does not result in an especially clean weapon by itself. If minimization of fallout and other sources of residual radiation is desired then considerably more effort needs to be put into design.
Minimum residual radiation designs are especially important for "peaceful nuclear explosions" (PNEs). If a nuclear explosive is to be useful for any civilian purpose, all sources of residual radiation must be reduced to the absolute lowest levels technologically possible. This means eliminating neutron activation of bomb components and of materials outside the bomb, and reducing the fissile content to the smallest possible level. It may also be desirable to minimize the use of relatively hazardous materials like plutonium.
The problems of minimizing fissile yield and eliminating neutron activation are the most important. Clearly any MRR, even a small one, must be primarily a fusion device. The "clean" devices tested in the fifties and early sixties were primarily high yield strategic three-stage systems. For most uses (even military ones) these weapons are not suitable. Developing smaller yields with a low fissile content requires considerable design sophistication: small light primaries so that the low yields still produce useful radiation fluxes, and high-burnup secondary designs to give a good fusion output.
Minimizing neutron activation from the abundant fusion neutrons is a serious problem, since many materials inside and outside the bomb can produce hazardous activation products. The best way of avoiding this is to prevent the neutrons from getting far from the secondary. This requires using an efficient clean neutron absorber, i.e. boron-10. Ideally this should be incorporated directly into the fuel, or as a lining of the fuel capsule to prevent activation of the tamper. Boron shielding of the bomb case and the primary may be useful also.
It may be feasible to eliminate the fissile spark plug of an MRR secondary by using a centrally located deuterium-tritium spark plug, similar to the way ICF capsules are ignited. Fusion bombs unavoidably produce tritium as a by-product, which can be a nuisance in PNEs.
Despite efforts to minimize radiation releases, PNEs have largely been discredited as a cost-saving civilian technology. Generally speaking, MRR devices still produce excessive radiation levels by civilian standards making their use impractical.
MRRs may have military utility as a tactical weapon, since residual contamination is slight. Such weapons are more costly and have lower performance of course.
This leads to another reason why PNEs have lost their attractiveness - there is no way to make a PNE device unsuitable for weapons use. "Peaceful" use of nuclear explosives inherently provides opportunity to develop weapons technology. As the saying goes, "the only difference between a PNE and a bomb is the tail fins".
4.5.5 Radiological Weapon Designs

This is the opposite extreme of an MRR. Earlier several tamper materials were described that could be used to tailor the radioactive contamination produced by a nuclear explosion - tantalum, cobalt, zinc, and gold. Uranium tampers produce contamination in abundance - but quite a lot of energy too. In some applications it may be desired that the ratio of contamination to explosive force be increased, or tailored to a narrower spectrum of decay times compared to fission by-products.
Practical radiological weapons must incorporate the precursor isotope directly into the secondary. This is because the high compression of the secondary allows the use of reasonable masses of precursor material. In an uncompressed state, the thickness of most materials required to capture a substantial percentage of neutrons is 10-20 cm, leading to a very massive bomb. A layer of 1 cm or less will do as well when compressed by radiation implosion.
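The thickness argument rests on areal density: the fraction of neutrons captured depends on the grams per square centimeter along the neutron's path, so compressing the layer k-fold allows a layer k times thinner to do the same job. A sketch with illustrative numbers:

    # Capture probability depends on areal density (thickness x density),
    # so a k-fold density increase permits a k-fold thinner layer.
    def compressed_thickness_cm(t_uncompressed_cm, compression):
        return t_uncompressed_cm / compression

    print(compressed_thickness_cm(15.0, 20.0))   # 15 cm layer -> 0.75 cm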
Some radioisotopes that would be very attractive for certain applications are difficult to produce in a weapon. A case in point is sodium-24, an extremely prolific producer of energetic gammas with a half-life of 14.98 hours. This isotope produces a remarkable 5.515 MeV of decay energy, with two hard gammas per decay (2.754 MeV and 1.369 MeV), and might be desired for very short-lived radiation barriers. The most obvious precursor, natural Na-23, has a minuscule capture cross section for neutrons in the keV range (although it is a significant hazard from induced radioactivity in soil after low altitude nuclear detonations). The best precursor candidate for Na-24 is probably magnesium-24 (78.70% of natural magnesium) through an n,p reaction.
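The "short-lived barrier" character of Na-24 follows directly from its half-life. A simple decay calculation:

    # Fraction of Na-24 activity remaining after t hours (T1/2 = 14.98 h).
    def na24_fraction_remaining(t_hours, half_life_hours=14.98):
        return 0.5 ** (t_hours / half_life_hours)

    print(na24_fraction_remaining(24.0))   # ~0.33 after one day
    print(na24_fraction_remaining(72.0))   # ~0.036 after three days

An area contaminated with Na-24 would thus become dramatically less hazardous within days, rather than on the months-to-years timescale of mixed fission products.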
4.6.1 Weapon Safety
Due to their enormous destructive power, it is extremely important to ensure that nuclear weapons cannot explode at either their full yield, or at reduced yield, unless stringent and carefully specified conditions are met.
Weapons must be resistant to:
To meet these requirements elaborate provisions for weapon safety are required. This issue has been of major concern since the first nuclear weapons, and many of the major advances in weapon design are related to weapon safety.
Weapons are invariably designed with a series of disabling mechanisms, all of which must be successfully overridden before an explosion can occur. These include locking mechanisms requiring special keys or codes, redundant safeties that must be removed to arm the weapon, environmental sensing switches (disabling mechanisms that are overridden only when the weapon has experienced environmental conditions and stresses expected during operational employment), and sophisticated fuzing systems to detonate the device at the proper place and time. Often these multiple safety systems require cooperation by more than one person to complete weapon arming.
Scenarios that must be addressed include:
**** Unfinished ****
4.6.1.1 Safeties and Fuzing Systems
4.6.2 Variable Yield Designs
**** Unfinished ****
4.6.3 Other Modern Design Features
**** Unfinished ****
4.7 Speculative Weapon Designs
**** Unfinished ****
4.8 Simulation and Testing
**** Unfinished ****