FPGAs with built-in AES: The key to secure system designs

July 01, 2008

Choosing the correct encryption algorithm and selecting the appropriate key storage are two important design considerations.

Embedded systems can easily fall prey to hackers, security breaches, and malicious attacks unless effective security is incorporated into the system design. Security is an even greater issue today because new, proprietary technologies and valuable IP are used as competitive barriers. Up until now, the technology for implementing conventional security has been cumbersome, outdated, and costly. However, current trends are encouraging designers to embed the highest level of security in FPGAs for more efficient and less costly designs.

FPGAs that conform to the National Institute of Standards and Technology (NIST) Federal Information Processing Standard (FIPS) 197 support configuration bitstream encryption using the 256-bit Advanced Encryption Standard (AES) and a nonvolatile key. AES is the strongest symmetric encryption standard NIST has approved to date. A user-defined 256-bit AES key can be programmed into the nonvolatile key storage inside the FPGA.

Choosing the correct encryption algorithm and selecting the appropriate key storage are two important design considerations. AES supports key sizes of 128, 192, and 256 bits and replaces the Data Encryption Standard (DES), which uses a 56-bit key and 64-bit data blocks. AES's larger keys provide greater security, and AES encrypts data faster than Triple DES (3DES), which in effect encrypts each block three times with three keys.

Encryption converts electronic data into an unintelligible form commonly referred to as ciphertext; decrypting the ciphertext converts the data back into its original form, or plaintext. The AES algorithm is a symmetric block cipher that encrypts (enciphers) and decrypts (deciphers) electronic data in 128-bit blocks. Symmetric means the same key is used for both encryption and decryption; block cipher means the data is processed in fixed-size blocks. Symmetric-key block ciphers are used in many industries because they combine strong protection with efficiency, ease of implementation, and fast data processing.
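To make the symmetric block cipher property concrete, the short sketch below round-trips a single 128-bit block through AES-256. The Python language and the open-source cryptography package are assumptions made purely for illustration and have no connection to the FPGA tool flow.

```python
# Minimal sketch: AES-256 as a symmetric block cipher (requires the
# open-source "cryptography" package). The same 256-bit key both
# encrypts and decrypts a single 128-bit block.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                    # 256-bit symmetric key
block = b"one 128-bit blk!"             # AES always operates on 16-byte (128-bit) blocks

cipher = Cipher(algorithms.AES(key), modes.ECB())   # single-block demo; ECB for brevity only
encryptor = cipher.encryptor()
ciphertext = encryptor.update(block) + encryptor.finalize()

decryptor = cipher.decryptor()
recovered = decryptor.update(ciphertext) + decryptor.finalize()
assert recovered == block               # same key decrypts: a symmetric cipher
```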

The choice of key storage is the second important design consideration. The key is held in either volatile or nonvolatile storage, depending on the chip vendor. Once power to volatile storage is removed, the key is lost unless an external battery is connected to the chip as a backup supply. Nonvolatile key storage, on the other hand, gives the designer greater flexibility.

For example, the embedded nonvolatile key in an FPGA can be programmed either on-board or off-board. The security key is stored in poly fuses inside the FPGA. Because poly fuses are nonvolatile and one-time programmable, this storage approach is more reliable: no external backup battery is needed.

Poor reliability is the biggest problem batteries pose for volatile key storage. Battery life is affected by temperature and moisture, and when the battery dies, the key is lost. The device can then no longer be configured, and the equipment must be returned to the vendor for repair and key reloading. Battery backup also raises cost because it is harder to manufacture, requiring more components, board space, and engineering work.

Batteries usually cannot withstand the high-temperature reflow process and must be soldered onto the board afterwards, adding a manufacturing step. Volatile key storage also requires the key to be programmed into the device after it is soldered onto the board.

Because nonvolatile storage is one-time programmable, the key is tamperproof. That is not possible with volatile storage, where the battery can be removed to erase the key, after which the FPGA can be configured with an ordinary unencrypted configuration file.

Designing security into a system

Figure 1 shows how security is implemented in Altera's Stratix III FPGAs using the Quartus II design software. The first step is programming the security key into the FPGA. The design software requires two 256-bit user-defined keys (Key 1 and Key 2) to generate a key programming file, which is then loaded into the FPGA through the JTAG interface.

Figure 1

Next, the AES encryption engine built into the FPGA generates the real key that will be used to decrypt configuration data in step three. The real key, created by encrypting Key 1 with Key 2, is then processed by a proprietary function before being stored in the 256-bit nonvolatile key storage.
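As a rough, unofficial sketch of this step, the snippet below mirrors only the data flow described above: Key 1 is encrypted with Key 2 to form the real key, which is then obfuscated before storage. The actual on-chip logic and the proprietary function are not public, so the XOR-based obfuscation here is a hypothetical placeholder.

```python
# Conceptual sketch of step one (not Altera's actual implementation):
# derive the "real key" by AES-encrypting Key 1 with Key 2, then pass it
# through a stand-in for the undisclosed proprietary function before storage.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_real_key(key1: bytes, key2: bytes) -> bytes:
    """Encrypt the 256-bit Key 1 with Key 2 (two 128-bit AES blocks)."""
    enc = Cipher(algorithms.AES(key2), modes.ECB()).encryptor()
    return enc.update(key1) + enc.finalize()

def proprietary_obfuscate(real_key: bytes) -> bytes:
    """Hypothetical placeholder; the real function is Altera-proprietary."""
    return bytes(b ^ 0xA5 for b in real_key)      # illustrative only

key1, key2 = b"\x11" * 32, b"\x22" * 32           # user-defined 256-bit keys
stored_fuse_value = proprietary_obfuscate(derive_real_key(key1, key2))
```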

In step two, the configuration file is encrypted and stored in external memory. The design software requires the two 256-bit keys (Key 1 and Key 2) to encrypt the configuration file. The Quartus II AES encryption engine generates the real key by encrypting Key 1 with Key 2. The real key is used to encrypt the configuration file, which is then loaded into external memory, such as a configuration or flash device.

In step three, the FPGA is configured. At system power-up, the external memory device sends the encrypted configuration file to the FPGA. The 256-bit nonvolatile key in the FPGA is processed by the inverse of the proprietary function to recover the real key. The AES decryption engine then uses the real key to decrypt the configuration file and configure the device.
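The following sketch ties steps two and three together under the same caveats: the bitstream contents, the counter (CTR) cipher mode, and the file layout are stand-ins chosen for illustration, not the actual (undisclosed) Quartus II bitstream format.

```python
# Combined sketch of steps two and three (illustrative only; the real
# bitstream format, cipher mode, and proprietary key function are not public).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def derive_real_key(key1: bytes, key2: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key2), modes.ECB()).encryptor()
    return enc.update(key1) + enc.finalize()          # real key = AES_key2(Key 1)

key1, key2 = b"\x11" * 32, b"\x22" * 32               # user-defined 256-bit keys
real_key = derive_real_key(key1, key2)

# Step two (design software): encrypt the configuration file for external flash.
bitstream = os.urandom(4096)                          # stand-in configuration data
nonce = os.urandom(16)                                # CTR mode assumed for the demo
enc = Cipher(algorithms.AES(real_key), modes.CTR(nonce)).encryptor()
stored_in_flash = nonce + enc.update(bitstream) + enc.finalize()

# Step three (FPGA at power-up): regenerate the real key and decrypt the file.
dec = Cipher(algorithms.AES(real_key), modes.CTR(stored_in_flash[:16])).decryptor()
recovered = dec.update(stored_in_flash[16:]) + dec.finalize()
assert recovered == bitstream                         # device can now configure itself
```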

Security break-ins

As part of the design process, system designers must identify and understand the different types of security breaches, including copying, reverse engineering, and tampering, as shown in Table 1.

Copying involves making identical copies of a design without understanding how it works. Copying can be accomplished by either reading the design out of the memory device or capturing the configuration file when it is sent from the memory device to the FPGA at power-up. The stolen design can then be used to configure other FPGAs. This approach constitutes a primary form of IP theft and can lead to significant revenue loss.

Reverse engineering entails analyzing the configuration file to re-create the original design at the register transfer level or in schematic form. The re-created design can then be modified to gain a competitive edge. This is a more complex form of IP theft than copying and usually requires significant technical expertise. It is also time- and resource-intensive and sometimes requires more work than creating a design from scratch.

Tampering involves modifying the design stored in the device or replacing it with a different design. A tampered device might contain harmful design code capable of causing the system to malfunction or of stealing sensitive data.

Most nonvolatile FPGAs have a feature that permits configuration data to be read back for debugging purposes, as shown in Figure 2. Designers can usually set security bits for the device. When security bits are not set, readback is allowed and obtaining configuration data is straightforward. But when security bits are set, readback is disabled. One way to conduct a readback attack when security bits are set is to detect where security bits are located in the FPGA and deactivate them to enable readback.

Figure 2

Setting up intrusion barriers

Some FPGAs make it virtually impossible for attackers to steal IP from highly secured embedded designs. In particular, detecting and deactivating security bits can be difficult, thus providing designers greater defense against copying. The following discussion explains how designers can set up those security defenses.

Poly fuses storing the security keys are hidden under layers of metal among hundreds of other poly fuses. It is nearly impossible to determine a particular fuse’s functionality by simple visual inspection. The programming status of the poly fuses used for other functions can be different from device to device.

This randomness makes it more difficult to identify which fuses store the security key. Also, even if the poly fuses storing the security key are identified, the real key used for decryption is not revealed because it is processed by the proprietary function prior to storage. Without knowing the real key, the design cannot be decrypted.

These FPGAs are also secure against readback attacks because they do not support configuration file readback, which blocks any attempt to read the configuration file back after it has been decrypted inside the FPGA. Furthermore, a design cannot be copied by programming the security key into another FPGA and configuring it with the captured encrypted configuration file: two 256-bit keys are required to program the security key, and because AES is used to generate the real key, it is virtually impossible to derive Key 1 and Key 2 from the security key.

Reverse-engineering a design from the configuration file is difficult and time-consuming as well, even without encryption. The FPGA configuration file contains millions of bits, and the configuration file formats are proprietary and confidential. To reverse-engineer a design requires reverse-engineering the FPGA or design software being used to reveal the mapping from the configuration file to the device resources.

Reverse-engineering these FPGAs is more difficult than reverse-engineering ASICs. Standard tools are not readily available to reverse-engineer these FPGAs, which are manufactured on an advanced 65 nm process node. In fact, reverse-engineering just one FPGA logic block can take a significant amount of time and resources. Configuration bitstream encryption makes reverse engineering even more challenging: finding the security key needed to decrypt the configuration file is just as difficult as it is in a copying attack. It might therefore be easier and quicker to create a competitive design from scratch than to attempt to reverse-engineer a secured FPGA design such as this.

Nonvolatile keys are one-time programmable to guard against tampering. After the FPGA is programmed with the key, it can be configured only with configuration files encrypted with the same key. Attempts to configure the FPGA with an unencrypted configuration file, or with a file encrypted with the wrong key, result in configuration failure. A configuration failure signals possible tampering, whether the design was altered in external memory, during transmission between the external memory and the FPGA, or during a remote system upgrade.
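To illustrate how a wrong key can be detected, the hedged sketch below prepends a known sync word to the data before encryption and checks it after decryption; a mismatch signals a configuration failure. The sync word, cipher mode, and check are illustrative assumptions, not the device's actual integrity mechanism.

```python
# Hedged sketch of configuration-failure detection: a known sync word is
# checked after decryption. The sync word, cipher mode, and check are
# illustrative assumptions, not the FPGA's real (undisclosed) mechanism.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SYNC = b"\xAAfpga-sync\x55\x00\x00\x00\x00\x00"       # hypothetical 16-byte marker

def encrypt_config(bitstream: bytes, key: bytes) -> bytes:
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(SYNC + bitstream) + enc.finalize()

def try_configure(encrypted: bytes, key: bytes) -> bool:
    dec = Cipher(algorithms.AES(key), modes.CTR(encrypted[:16])).decryptor()
    plain = dec.update(encrypted[16:]) + dec.finalize()
    return plain.startswith(SYNC)                     # False -> configuration failure

good_key, wrong_key = os.urandom(32), os.urandom(32)
image = encrypt_config(os.urandom(1024), good_key)
assert try_configure(image, good_key)                 # correct key configures the device
assert not try_configure(image, wrong_key)            # wrong key flags possible tampering
```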

Design option comparisons

Besides the aforementioned FPGA security system, other design options available to designers include SRAM-based FPGAs limited to 3DES encryption, flash-based FPGAs, and antifuse-based FPGAs. Table 2 describes the cost of attacks in each case.

Nonvolatile FPGAs retain their configurations when the power is off. One way to reveal the device configuration is to probe or detect each nonvolatile cell's programmed state. Two side-channel attacks on a flash-based FPGA are electron emission detection and transistor threshold-voltage measurement.

An attack via electron emission detection first involves removing the device's package to expose the die. The device is then placed in a vacuum chamber and powered up, and the attacker uses a transmission electron microscope to detect and display emissions. The second technique exploits the fact that a transistor's threshold voltage rises gradually as electrons accumulate on the floating gate; measuring this shift can reveal a cell's programmed state.

In addition to these two side-channel attacks, another popular technique, the power attack, involves measuring an FPGA's power consumption to determine which function the device is performing. The effort required for a readback attack on flash-based FPGAs varies from vendor to vendor and depends on how well the security bits are protected in the device. Probing each floating gate in a flash-based FPGA also takes a great deal of time and effort because the gate does not physically change after programming. The state, which is isolated by oxide, is determined by the presence or quantity of electrons on the floating gate between the select gate and the substrate (see Figure 3).

Figure 3

Furthermore, reverse-engineering a flash FPGA design is not easy because the configuration file must first be obtained; the attacker must succeed at copying before reverse engineering can begin. Designers should also be aware that tampering with a flash-based FPGA is easy because the device is reprogrammable, so a tamperproof mechanism must be used if tampering is a concern.

Programming-state probing is also used to attack antifuse-based FPGAs. Techniques include Focused Ion Beam (FIB) imaging and Scanning Electron Microscopy (SEM). FIB is used for microscope imaging and for cross-sectioning the device, while SEM involves raster-type scanning to detect secondary electrons emitted from the surface. Analyzing an antifuse-based FPGA's programming state is extremely time-consuming, given the millions of antifuse links and the small percentage that are programmed.

Improved risk management strategies

Designers must estimate total security costs and make trade-offs to determine the level of security that is right for the device under design. To achieve a high level of security, designers must analyze potential threats, consider the probability of attack given a particular set of vulnerabilities, and set up effective and appropriate defenses. FPGAs offer several reliable security schemes that enable designers to implement less costly strategies for managing risks.

Altera Corporation
408-544-7000
[email protected]
www.altera.com