Sunday, December 7, 2014

ARM TrustZone technology - from Monitor Mode to Dedicated Security Co-Processing and the Secure Element(s)

"A design that places sensitive resources in the Secure world and implements robust software running on the secure processor cores can protect assets against many possible attacks, including those that are normally difficult to secure, such as passwords entered using a keyboard or touch-screen. By separating security-sensitive peripherals through hardware, a designer can limit the number of sub-systems that need to go through security evaluation and therefore save costs when submitting a device for security certification." - ARM.com

NOTE: There are variations in how software is implemented in the secure world - from a simple synchronous library of code to a full-blown operating system.

The execution of the normal OS and secure OS is interleaved over time by monitor mode, which context-switches the state of each world on the physical processor. Monitor mode is entered explicitly, via a dedicated instruction (SMC, the Secure Monitor Call) or a configured exception. This explicit triggering contrasts with the typical scheduling algorithms that trigger context switching in modern preemptive operating systems.

There are varying levels of complexity regarding how the physical hardware in which the secure world runs is designed. These range from both worlds running on the same physical processor core to the secure world running on a completely separate processor core.

Another type of physical hardware design adds a microprocessor that is separate from the main processor, so that the secure world software stack (secure OS and secure applications) runs on a dedicated co-processor. This design does not preclude a secure OS on the main ARM processor: the normal OS still runs on the main ARM processor, and if the main ARM processor has ARM TrustZone technology, a secure OS can run on it as well, while a different secure OS and secure application software run on the dedicated co-processor.

Client applications running on a secure OS can communicate with the main ARM processor via a set of APIs and commands. There are certain benefits to the secure OS always running on a dedicated security processor core or co-processor.

The operating system that runs on the co-processor can be optimized for just that co-processor. There are many types of dedicated co-processors; the ARM SecurCore microprocessor is one of them. ARM SecurCore microprocessors are used in systems that require dedicated processors for security-sensitive applications such as SIM cards, e-Government, banking, and identification. Designs that incorporate ARM SecurCore microprocessors can realize multiple key benefits, including performance improvements, energy efficiency, and physical security. Designing and building an operating system for a single chip means that the operating system can use all of the features, and only those features, that the chip provides.

In summary, here are the key points:
  • Operating system software runs on the main ARM processor or application processor. Software applications run on the application processor. ARM processors in ARM-based mobile phones may or may not have TrustZone processor security technology. If the ARM processor has TrustZone processor security technology, then it may or may not be used.
  • There are additional processors on mobile phones that act as dedicated security co-processors. These include the secure element on the UICC or SIM card (UICC-based SE) and the secure element that has been soldered on the printed circuit board. The secure element that has been soldered to the printed circuit board is called the embedded SE. The iPhone 6, iPhone 6 Plus, Samsung S5, Galaxy Nexus, Nexus S, Nexus 7, Sony Xperia series, and a host of others contain a secure element soldered onto the printed circuit board. If the phone has a secure element that has been soldered onto the printed circuit board, then it is most likely contained within the packaging of a larger SoC that also contains the Near Field Communication (NFC) Radio-Frequency (RF) controller. Last but not least, it is entirely possible that the phone contains a secure element on the microSD card.
  • The embedded SE and UICC-based SE run a trusted OS. Trusted applications run on top of the trusted OS. In contrast to the trusted OS that runs within the secure world on a main ARM processor with ARM TrustZone processor security technology, the trusted OS that runs on the embedded SE and UICC-based SE has no direct access to the application processor's hardware peripherals or to the normal world software running on it.
  • The UICC-based SE and embedded SE are protected by cryptographic keys. Client software applications running on the trusted OS in a processor with ARM TrustZone architecture security extensions are also protected by cryptographic keys.
  • There are multiple standards bodies that have established APIs, architecture documents, design documents, and so forth for the trusted operating system and trusted applications that run on the UICC-based SE and embedded SE. These entities are also responsible for the hardware interface on the physical secure element.

ARM TrustZone technology - a Few Good Boards

ARM provides the Juno ARM Development Platform, a reference platform for software and hardware developers building systems based on ARM Cortex-A processors. This platform contains a board that houses an ARM Cortex-A57 processor and the ARM Cortex-A53 MPCore processor. Both processors are 64-bit and implement the ARMv8-A instruction set architecture (ISA). Developers can build a board support package for this board using OpenEmbedded/Yocto.

The Apple A7 and Apple A8 chips found in the iPhone 5s, iPhone 6, and iPhone 6 Plus are likewise 64-bit designs that implement the ARMv8-A ISA, although they use Apple's custom cores rather than the Cortex-A53 and Cortex-A57. Additionally, the Samsung Exynos 5433 Octa SoC contains the ARM Cortex-A57 and the ARM Cortex-A53 MPCore, and the Samsung Galaxy Note 4 ships with this 8-core Exynos 5433 processor.

The Nvidia Jetson TK1 development board, which has a quad-core ARM Cortex-A15 processor, is currently available for purchase. However, the latest board from Nvidia, known as "Denver", which is rumored to contain the ARM Cortex-A57 and the ARM Cortex-A53 MPCore, is not yet available for purchase.

The Freescale i.MX 6 processor has been widely adopted across various industries for a range of embedded products. Freescale offers the SABRE board for intelligent devices, a development board that features the i.MX 6 quad-core ARM Cortex-A9 processor. Boundary Devices also sells its variation of this board with the same ARM Cortex-A9 MPCore. Developers can build a board support package for both the Freescale SABRE board and the Boundary Devices board using OpenEmbedded/Yocto.

When working with ARM development boards, it is important to take a few critical features into account. In particular, the e-fuses should arrive from the factory unblown; they can later be blown to lock in a specific configuration.

Here is a quick overview of the processors and boards.

Processor                                  Manuf      ISA      Dev Board            TrustZone
ARM Cortex-A57 and ARM Cortex-A53 MPCore   ARM        ARMv8-A  Juno Ref Platform    Yes
ARM Cortex-A15                             Nvidia     ARMv7    TK1                  Yes
ARM Cortex-A15                             Samsung    ARMv7    Arndale Exynos 5420  Yes
ARM Cortex-A9 MPCore                       Freescale  ARMv7    Freescale SABRE      Yes
ARM Cortex-A9 MPCore                       Freescale  ARMv7    Boundary Devices     Yes
ARM Cortex-A9 MPCore + Zynq-7000 FPGA      Xilinx     ARMv7    ZedBoard             Yes
ARM Cortex-A9 MPCore + Zynq-7000 FPGA      Xilinx     ARMv7    Digilent             Yes

Friday, November 21, 2014

C++ - Generative Programming

C++ IOStreams are a powerful mechanism for transforming input into output. Most programmers are at least familiar with C++ IOStreams in the context of reading and writing bytes to a terminal or file.

When a file or terminal is opened for reading or writing by a process, the operating system returns a numerical identifier to the process. This numerical identifier is known as a file descriptor. In turn, the file or terminal can be written to by the process via this file descriptor. The read and write system calls, which are implemented as wrappers in libc, are passed this numerical file descriptor.

Many layers of abstraction reside on top of the read and write system calls. These layers of abstraction are implemented in both C and C++. Examples of C-based layers of abstraction are fprintf and printf. Internally, these functions call the write system call. An example of a C++-based layer of abstraction is the IOStreams hierarchy. Out of the box, most C++ compiler toolchains provide an implementation of IOStreams. IOStreams are an abstraction on top of the read and write system calls. When data is written to a terminal via an IOStream, the IOStream implementation calls the write system call. Lastly, these layers of abstraction handle things such as buffering and file synchronization.

In UNIX, everything is a file. Consequently, network devices, virtual terminals, files, block devices, etc., can all be written to via a numerical file descriptor, which is why UNIX is said to have a uniform descriptor space. That said, the basic IOStreams and printf abstractions mentioned above are not designed to be used with network sockets, pipes, and the like. The lower-layer read and write system calls can be used, but a number of functions must be called before raw bytes can be written to an open file descriptor that points to a network socket.

The additional functionality that is needed for communicating with network sockets, shared memory, and the like can be implemented in classes that are derived from the C++ iostream class. It is for this reason that the IOStreams classes are extended via inheritance.

Over the years, several popular C++ libraries have implemented classes that are derived from the base classes in the iostreams hierarchy. The C++ Boost library is a popular example. However, this has not always been the case. Going back to 1999, the Boost library did not exist, and there were one or two examples on the entire Internet as to how to properly extend the C++ IOStreams classes.

In 1999, the source code for the GNU compiler toolchain that is available on gcc.gnu.org was obtained, and a class hierarchy was derived to support sockets, pipes, and shared memory. The methods in the classes derived from the base classes in the iostreams library were designed to be reentrant and easy to use. Generative programming techniques and template metaprogramming were used to create objects that could be instantiated using familiar C++ iostreams syntax and semantics. The library created was called mls, and it was licensed under version 2 of the GPL.

Since 1999, Boost has come a long way. It provides support for cryptographic IOStreams, sockets, and all kinds of other fancy stuff, and it uses generative programming techniques. The GCC compiler toolchain can be obtained from gcc.gnu.org, and ctags can then be used to dig into the internals of the IOStreams hierarchy. The following book is recommended: Generative Programming - Methods, Tools, and Applications.

namespace mls
{
    template<class BufType, int direction, class BaseType=mlbuf> class mlstreamimpl;
    template<class Parent, class BaseType=mlbuf> class mloutputimpl;
    template<class Parent, class BaseType=mlbuf> class mlinputimpl;

    template<class BufType, int direction, class BaseType=BufType>
    struct StreamConfig;

    template<class BufType, int direction, class BaseType>
    struct StreamConfig
    {
        typedef typename SWITCH<(direction),
            CASE<0, mlinputimpl<mlstreamimpl<BufType, direction, BaseType>, BufType>,
            CASE<1, mloutputimpl<mlstreamimpl<BufType, direction, BaseType>, BufType>,
            CASE<10, mlinputimpl<mloutputimpl<mlstreamimpl<BufType, direction, BaseType>, BufType>, BufType>,
            CASE<DEFAULT, mlinputimpl<mlstreamimpl<BufType, 10, BaseType>, BufType> > > > > >::RET Base;
    };
}

Monday, October 24, 2011

Android Command Line Dev with VI

Notes on developing Android apps from *NIX command line.

Building an Android application from the command line with VI can save time. Here are some notes on setting up Vim w/ tags and code completion for Android development. The relevant Ant commands for building Android apps from the command line are included. The example includes the commands for building and installing an Android app that links to a dependent java library which resides outside of the project source tree (in this case, the lvl lib), along with a C shared library that resides in the local jni/ directory.

Useful Vim Plugins for Android Development
  • Tag List
  • Nerd Tree
  • VIM JDE
Setting up Vim JDE (vjde) requires a few configuration changes in order to work well with Android projects. First, download vjde.tgz version 2.6.18 from http://www.vim.org/scripts/download_script.php?src_id=16253

Place vjde.tgz in $HOME/.vim and tar -zxvf vjde.tgz from within $HOME/.vim. Change the permissions on $HOME/.vim/plugin/vjde/readtags as follows:

$ chmod +x $HOME/.vim/plugin/vjde/readtags

Open an empty editor: $ vim and enter the following in command mode:
:helptags $HOME/.vim/doc

:h vjde
will then pull up the help page.

That should take care of setting up vjde. Now cd to the Android project dir. Open a blank editor and input the following in command mode:
:Vjdeas .myproject.prj
:let g:vjde_lib_path='/<path_to_android_sdk_top_level_dir>/platforms/ \
<desired_sdk_target>/android.jar:bin/classes:build.classes'
:Vjdesave
:q!

Next, open up a source file in the project and type :Vjdeload .myproject.prj in command mode (or script it and/or add it to .vimrc). Use <ctrl-x><ctrl-u> for code completion. For example: import android.<ctrl-x><ctrl-u> brings up a nice little dialog box for browsing the matching frameworks.

Next, run ctags over the java and native sources as follows:
$ ctags -R src gen jni
Once NERD tree and Taglist are placed in ~/.vim/plugin/, the following lines in .vimrc will allow the use of <ctrl-n> and 
<ctrl-m> to toggle the file explorer and visual tag list.
nmap <silent> <c-n> :NERDTreeToggle<CR>
nnoremap <silent> <c-m> :TlistToggle<CR>
Also, for a status line:
set statusline=\ %{HasPaste()}%F%m%r%h\ %w\ \ CWD:\ %r%{CurDir()}%h\ \ \ Line:\ %l/%L:%c
function! CurDir()
let curdir = substitute(getcwd(), '/Users/myhomedir/', "~/", "g")
return curdir
endfunction

function! HasPaste()
if &paste
return 'PASTE MODE '
else
return ''
endif
endfunction
Vim should be good to go at this point. cd back to $HOME/src/myproject. This particular example accounts for a dependent Java library (the lvl) that resides outside of the project source tree, a shared library (which consists of a few C files natively compiled), and plain java source files in the appropriate src/com/ package subdir.

From within the top level project dir (assuming that Eclipse was used, otherwise, android create can be used ...),
$ android update project --name myproject --target <desired_sdk_target> \
    --path $HOME/src/myproject
$ android update project --target <desired_sdk_target> --path $HOME/src/myproject \
    --library ../lvl_lib_dir

Make sure to check project.properties to ensure that the android.library.reference.1 variable now contains the relative pathname of the lvl lib directory.

Assuming that jni/Android.mk and jni/Application.mk are appropriately set up for the shared library, run ndk-build from the top-level project directory.
ant debug should now build the debug version of the application package file.

Start up an emulator and then install the app with
adb install -r bin/myproject-debug.apk or use ant install.
Next, open the Dev tools application in the emulator and configure the following: set wait for debugger and select the application for debugging.
Next, run ddms & and check the debug port. It should be 8700.
Subsequently, start the activity with
adb shell 'am start -n com.mycohname.myproject/.BaseActivityName'
And finally, connect via jdb from the shell with
$ jdb -sourcepath $HOME/src/myproject -attach localhost:8700
and start debugging.

Tuesday, September 6, 2011

Radius and 802.1X

Configure Radius and 802.1X.


1. Generate a new self-signed root CA, write the encrypted private key to CA/priv/cakey.pem, and then write the Base64-encoded (PEM), self-signed certificate to CA/cacert.pem. This certificate will be used for signing client and server certificates.
# openssl req -new -x509 -extensions v3_ca -keyout CA/priv/cakey.pem -out CA/cacert.pem -days 730 -config openssl.cnf

# openssl x509 -in CA/cacert.pem -noout -text
# openssl x509 -in CA/cacert.pem -noout -dates
# openssl x509 -in CA/cacert.pem -noout -purpose
# openssl x509 -in CA/cacert.pem -noout -issuer

Check the modulus and public exponent in the private key and certificate to make sure they match:
# openssl rsa -noout -modulus -in CA/priv/cakey.pem | openssl sha1
# openssl x509 -noout -modulus -in CA/cacert.pem | openssl sha1

2. Export the root CA signing certificate to ASN.1, DER encoded format so that clients can import it.
# openssl x509 -in CA/cacert.pem -outform DER -out clientCerts/myRootCA.der

2a. Convert the DER-encoded CA certificate back to PEM format and place it in a .crt file so that Android can read it. (This is strictly an extra step, since cacert.pem is already PEM and could simply be copied and renamed to a .crt file; Android just expects the PEM certificate in a file with a .crt extension.)
# openssl x509 -inform der -in clientCerts/myRootCA.der -out clientCerts/myRootCA.crt

3. Generate radius server certificate (i.e. signing request) and private key in unencrypted format.
# openssl req -new -nodes -keyout tempCerts/radius_key.pem -out tempCerts/radius_req.pem -days 730 -config openssl.cnf

4. Sign the radius server certificate. note: Microsoft clients require the creation of an xpextensions file. Add '-extensions xpserver_ext -extfile ./xpextensions' to the following command.
# openssl ca -out tempCerts/radius_cert.pem -infiles tempCerts/radius_req.pem -config openssl.cnf

5. Install the root CA signing certificate, Radius server private key, and Radius server signed certificate.
# cp tempCerts/radius_cert.pem /etc/radwl/certs/server/
# cp tempCerts/radius_key.pem /etc/radwl/certs/server/
# cp CA/cacert.pem /etc/radwl/certs/server/

6. Create the client certificate (i.e. signing request) and private key. note: match the output file names with the client identity or common name.
# openssl req -new -keyout tempCerts/myandroid_key.pem -out tempCerts/myandroid_req.pem -days 730 -config openssl.cnf

7. Sign the client certificate.
# openssl ca -out tempCerts/myandroid_cert.pem -infiles tempCerts/myandroid_req.pem -config openssl.cnf

8. Export the signed client certificate and private key to pkcs#12 format.
# openssl pkcs12 -export -in tempCerts/myandroid_cert.pem -inkey tempCerts/myandroid_key.pem -out clientCerts/myandroid_cert.p12 -clcerts

9. Install the signed client certs on the Radius server.
# cp tempCerts/*_cert.pem /etc/radwl/certs/clients

10. Copy the client pkcs#12 certificate to appropriate device.
# cp clientCerts/myandroid_cert.p12 DEVICE

11. Copy the CA signing certificate to the same device.
# cp clientCerts/myRootCA.crt DEVICE

12. On OS X, use the following command to add the freeradius user to the freeradius group. Also run chsh freeradius and set the shell to /sbin/nologin.
# dscl . append /Groups/freeradius GroupMembership freeradius

Tuesday, August 16, 2011

OpenSSH Security - Client Configuration

OpenSSH provides a suite of tools for encrypting traffic between endpoints, port forwarding, IP tunneling, and authentication. The instructions below outline a client-side OpenSSH configuration where the client is running OS X. The built-in firewall, ipfw, is enabled on the client to restrict outbound and inbound traffic. Part II (currently on hold) of this guide will cover the configuration of OpenSSH on the server along with the available options and alternatives for authentication, authorization, and traffic encryption. The configuration will force AES-256 in counter mode and will restrict the message authentication algorithms that may be used between endpoints. Most of the options in the ssh configuration file on the server will be disabled, public key authentication will be used, password authentication will be disabled, and the ssh daemon will bind to a high-numbered port. Multiple SSH sessions will share one connection via the ControlMaster and ControlPath client configuration directives. Also, a server certificate will be generated and used to sign user public keys. The CA-signed user public keys constitute a user certificate, which the server will in turn use for client authentication. PF will be used on the server for stateful packet filtering, connection blocking, and connection throttling.

First and foremost, the client has ipfw enabled and the firewall ruleset is configured in /etc/ipfw.conf. ipfw has been configured to block all inbound traffic and block all outbound traffic except for the ports and IP addresses that are necessary for connecting to the OpenSSH server. The server is running FreeBSD 8.2.

FreeBSD 8.2 server - sshd on a.b.c.d:21465, pf  | <-------- Internet --------> |  ipfw, OS X Lion - ssh client
To start with, install coreutils and apg on the client. coreutils and apg can be obtained from Mac ports and can be installed as follows:

client: $ sudo port install coreutils
client: $ sudo port install apg

Before generating a public/private keypair, generate a strong passphrase for the private key. It is important to store this passphrase in a secure location, not on a computer.

client: $ openssl rand -base64 1000 | shasum-5.12 -a 512 | apg -M SNCL -a 1 -m 20 -x 20

Depending on the version of OpenSSH (should be using latest stable for the OS), ECDSA may be used in addition to DSA and RSA. Certificates may also be used for user and host authentication. See the ssh-keygen man page for details. Generate the keypair using the following command. When prompted for the passphrase, use the output from the above command.

client: $ ssh-keygen -b 4096 -t rsa -C "$(id -un)@$(hostname)-$(gdate --rfc-3339=date)"

Here is an example of how to use ssh-keygen to generate a public/private keypair using the Elliptic Curve Digital Signature Algorithm. Both the client and server must be running a version of OpenSSH >= 5.7.

client: $ ssh-keygen -b 521 -t ecdsa -C "$(id -un)@$(hostname)-$(gdate --rfc-3339=date)"

Now, we need to push the public key to the server and place it in the authorized_keys file of the user that we are going to log in as over ssh.
The ssh-copy-id command can be used to automate this process. On the OS X client, the ssh-copy-id command does not come preinstalled with SSH. The ssh-copy-id command can be obtained from http://www.freebsd.org/cgi/cvsweb.cgi/~checkout~/ports/security/ssh-copy-id/files/ssh-copy-id?rev=1.1;content-type=text%2Fplain

After downloading the script, change its permissions and place it in the path.
At this point, the server should be running OpenSSH on port 22 with the default configuration. Transfer the public key with the following command:

client: $ ssh-copy-id -i ~/.ssh/id_xxxyy.pub bryan@a.b.c.d

It is time to setup connection sharing. Create the following file if it does not currently exist.

client: $ ls -l ~/.ssh/config
-rw-------  1 bryan  scclp  104 Aug 13 10:55 config


The file should contain these lines.

ServerAliveInterval 60
Host a.b.c.d
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h:%p


The goal is to only allow connections to the server in AES-256 counter mode, with umac-64 or hmac-ripemd160 MACs, and compression, on a non-standard SSH port from a designated IP range using public key authentication. Connections will also be throttled, and SSHGuard along with a few custom PF rules on the server will be used to block and log attackers. The commands that the client will use to connect to the server will look like this:

client: $ alias sshconnect="ssh -l bryan a.b.c.d -p 21465 -C -c aes256-ctr -m umac-64@openssh.com,hmac-ripemd160"
client: $ alias sshtunnel="ssh -v -ND 8090 bryan@a.b.c.d -p 21465 -C -c aes256-ctr -m umac-64@openssh.com,hmac-ripemd160"
client: $ alias sshmonitor="yes | pv | ssh -l bryan a.b.c.d -p 21465 -C -c aes256-ctr -m umac-64@openssh.com,hmac-ripemd160 \"cat > /dev/null\""
client: $ alias sshportforward="ssh -f bryan@a.b.c.d -p 21465 -C -c aes256-ctr -m umac-64@openssh.com,hmac-ripemd160 -L 15478:localhost:15479 -N"
client: $ alias sshportforward2="ssh -f bryan@a.b.c.d -p 21465 -C -c aes256-ctr -m umac-64@openssh.com,hmac-ripemd160 -L 17293:localhost:17294 -N"

Alternatively, Ciphers, MACs, and compression can be specified in the user config file as follows:

ServerAliveInterval 60
Host host.name.com
ControlMaster auto
ControlPath ~/.ssh/sockets/%r@%h:%p
Port 21465
User bryan
Ciphers aes256-ctr
Compression yes
MACs umac-64@openssh.com,hmac-ripemd160
StrictHostKeyChecking yes


User and host certificates provide a more convenient method of authentication for multiple clients (users) and servers (hosts). Certificate revocation also provides an easier method of quickly invalidating user access. A certificate authority key pair is first generated as follows; the CA is then placed in the /etc/ssh directory on the host.


ca $ ssh-keygen -t ecdsa -b 521 -f user_ca
server $ sudo mv user_ca* /etc/ssh/

On the client, generate a public/private key pair and then copy the public key to the server so that it can be signed with the CA. Make sure to set the validity period of the certificate. Alternatively, a host key may be signed with a CA key that is stored in a PKCS#11 token; OpenSSH supports CA keys stored in PKCS#11 tokens. Check the version of SSH and see ssh-keygen for more information.


client $ ssh-keygen -t ecdsa -b 521 -f ~/.ssh/id_ecdsa
client $ scp .ssh/id_ecdsa.pub bryan@server-ca:~/user_public_keys
server-ca $ ssh-keygen -s /etc/ssh/user_ca \
-O source-address=clientip \
-O permit-pty \
-O no-port-forwarding \
-O no-user-rc \
-O no-x11-forwarding \
-V -1d:+52w1d -z 6739301351 -I "bryan" -n bryan,clienthostname id_ecdsa.pub
id "bryan" serial 6739301351 for bryan,clienthostname valid from 2011-08-18T15:05:24 to 2012-08-17T15:05:24

Copy the signed user cert back to the client.


client $ scp bryan@server:~/user_public_keys/id_ecdsa-cert.pub ~/.ssh/

Set up the TrustedUserCAKeys and AuthorizedPrincipalsFile files, and then set the corresponding options in /etc/ssh/sshd_config on the server.


server-ca $ sudo sh -c 'cat /etc/ssh/user_ca.pub > /etc/ssh/trusted_user_ca_keys'

Modify /etc/ssh/authorized_principals to include the following lines:

bryan
from="clientip" bryan
Modify /etc/ssh/sshd_config on the server to include the following lines

TrustedUserCAKeys /etc/ssh/trusted_user_ca_keys
AuthorizedPrincipalsFile /etc/ssh/authorized_principals

Now, restart sshd on the server and add an appropriate host configuration for certificate authentication to ~/.ssh/config on the client.

Last of all, set up a host certificate by passing the -h option to ssh-keygen when signing a host key.

It is important to always keep OpenSSH updated with the latest, stable version that has been released for the operating system.

Thursday, March 3, 2011

Device Encryption in Android 3.0

Transparent encryption of block devices in Android 3.0.

The Motorola Xoom and several other new tablets on the market run Android 3.0, Honeycomb, which is built on the 2.6.36 Linux kernel. Most, if not all, of these Android tablets feature an Nvidia Tegra 2 processor. The 2.6.36 Linux kernel on these Android 3.0 Tegra 2 tablets introduces transparent, whole-disk encryption to everyday users. This encryption is provided by the dm-crypt device-mapper target in the Linux kernel, which creates a virtual layer on top of an existing block device and uses the crypto APIs in the Linux kernel for encryption and decryption of the underlying block devices.

Whether you are typing commands via a shell over a serial port or using the email application to check your email, the reads and writes to the file system are performed in the same manner with no changes to the upper-level applications.

After pressing the power button on the back of the Xoom tablet, the device boots up, and the user is presented with the desktop environment, from which he or she can choose to play a game, check email, or read an e-book. By tapping on Settings and then Location & Security, one can choose to "Encrypt tablet" from the screen. The encryption process takes approximately one hour, and the user is presented with a few basic screens upon completion.

After the encryption process is complete, the tablet is powered down. Upon rebooting the tablet, the user is prompted to input a PIN code, which is used to unlock the device. After entering the correct PIN code, the tablet powers up as normal, and the user can proceed with standard activities such as checking email, reading e-books, etc.

The Linux 2.6.36 kernel supports the device mapper framework, which allows virtual layers to be mapped on top of block devices for doing things like striping and mirroring. The device-mapper also provides a convenient target called dm-crypt, which is a device-mapper crypto target. The dm-crypt target provides transparent encryption of block devices.

Before the encryption operation, the output of the mount command shows the device name and mount point, indicating the partition where the user's data is stored, and this is the partition that will be encrypted.

/dev/block/platform/sdhci-tegra.3/by-name/userdata on /data type ext4 (rw,nosuid,nodev,noatime,barrier=1,data=ordered)
A few mount options to take note of: noatime, barrier=1, and data=ordered.

...And after the encryption operation
/dev/block/dm-0 /data ext4 rw,nosuid,nodev,noatime,barrier=1,data=ordered 0 0

dmsetup will give us more information. As can be seen from the command below, a dm-crypt device-mapper target called crypt has been set up in the kernel. The dm-crypt target provides transparent encryption and decryption of data on the block device using the crypto APIs in the Linux kernel.

# dmsetup targets
crypt            v1.7.0
striped          v1.3.0
linear           v1.1.0
error            v1.0.1
# dmsetup status
datadev: 0 61326304 crypt           v1.0.1
Setting aside the details surrounding key storage (see kernel source), supported ciphers (cat /proc/crypto), and hardware acceleration (see kernel source), here are some rudimentary performance tests that I ran before and after encrypting /data. For the interested reader, there are some kernel-level details related to the Tegra 2 processor that can be discovered by going through the source code for the Linux 2.6.36 Tegra 2 branch.

The initial results of the basic tests look good. There is a dedicated kernel thread for handling IO. The read latency appears to be related to the kernel IO thread, since reads on flash-based storage devices can usually be performed in near-constant time.

Unencrypted (2 GB Write - 1048572 2k blocks)
/data/local/tmp # time dd if=/dev/zero of=ofile bs=2k count=1048572

1048572+0 records in
1048572+0 records out
2147475456 bytes (2.0GB) copied, 255.912521 seconds, 8.0MB/s
real    4m 17.25s
user    0m 0.73s
sys     0m 24.55s
Unencrypted (2 GB Read - 1048572 2k blocks)
/data/local/tmp # time dd of=/dev/null if=ofile bs=2k count=1048572

1048572+0 records in
1048572+0 records out
2147475456 bytes (2.0GB) copied, 101.749864 seconds, 20.1MB/s
real    1m 41.79s
user    0m 1.15s
sys     0m 17.62s
Encrypted (2 GB Write - 1048572 2k blocks)
/data/local/tmp # time dd if=/dev/zero of=ofile bs=2k count=1048572

1048572+0 records in
1048572+0 records out
2147475456 bytes (2.0GB) copied, 260.219584 seconds, 7.9MB/s
real    4m 26.94s
user    0m 0.64s
sys     0m 24.12s
Encrypted (2 GB Read - 1048572 2k blocks)
/data/local/tmp # time dd of=/dev/null if=ofile bs=2k count=1048572

1048572+0 records in
1048572+0 records out
2147475456 bytes (2.0GB) copied, 124.291204 seconds, 16.5MB/s
real    2m 4.31s
user    0m 0.47s
sys     0m 7.74s