Monday, February 21, 2011

"Tracking" Down Credit Card Dumps

I have been examining numerous computers and media cards to support fraud investigations in the past year.  One group of thieves has been installing skimmers (devices that record magnetic stripe data) in the magnetic stripe readers on the doors of certain banking institutions that limit access to their ATMs after hours.  The thieves then install micro video cameras in the light fixtures of the ATMs to capture the users' PINs.  Later, the thieves match the recorded PIN to the account number skimmed at the door, encode new cards, and pillage bank accounts by withdrawing cash at another ATM location.

It is not difficult to find the ATM videos on the media cards from the cameras or on the computer hard drives.  The credit card/ATM card numbers can be a bit tougher, however.  Searching for 16-digit numbers on a Windows computer will yield a huge volume of false hits.  And, if the operation has been running for a while, many of the account numbers will have been deleted.  Searching unallocated space is necessary to be thorough and to ensure your suspects get the full reward of their illicit activities.

All of the skimmers I have encountered are serial devices that communicate with software made for Windows-based computers by means of a USB-to-serial cable.  The cables do not look dramatically different from a standard USB cable, but they contain a chip that bridges the two protocols.  On a Windows computer, a USB driver for the cable must be installed before the program can communicate with the device.

The applications that communicate with the skimmer devices write the data to disk differently--some create a database while others use text files.  In my experience, both formats contain the complete track 1 and/or track 2 data.  Magnetic stripes contain three tracks, the first two in common use and encoded with particular standards.

Track 1 (IATA)

Track 1, known as "International Air Transport Association" format, has alphanumeric data.  It takes on the following format (from wikipedia):

Track 1, Format B:
  • Start sentinel — one character (generally '%')
  • Format code="B" — one character (alpha only)
  • Primary account number (PAN) — up to 19 characters. Usually, but not always, matches the credit card number printed on the front of the card.
  • Field Separator — one character (generally '^')
  • Name — two to 26 characters
  • Field Separator — one character (generally '^')
  • Expiration date — four characters in the form YYMM.
  • Service code — three characters
  • Discretionary data — may include Pin Verification Key Indicator (PVKI, 1 character), PIN Verification Value (PVV, 4 characters), Card Verification Value or Card Verification Code (CVV or CVK, 3 characters)
  • End sentinel — one character (generally '?')
  • Longitudinal redundancy check (LRC) — one character; a validity character calculated from the other data on the track. Most reader devices do not return this value to the presentation layer when the card is swiped, and use it only to verify the input internally to the reader.
Targeting track 1 data in a grep search is very simple:
$ grep -E '^%.[0-9]{16,19}\^' file
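For example, the track 1 expression can be tested against a fabricated sample (the PAN and name below are invented for the demonstration, not real card data):

```shell
# Fabricated track 1 string: format code B, 16-digit PAN, name, expiry/service/discretionary data
sample='%B4111111111111111^DOE/JOHN^11031010000000000000?'
# grep prints the line because it matches the track 1 pattern
printf '%s\n' "$sample" | grep -E '^%.[0-9]{16,19}\^'
```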


Track 2 (ABA)

Track 2, known as the "American Banking Association" format, contains only numeric data.  It takes on the following format (from wikipedia):

Track 2: This format was developed by the banking industry (ABA). This track is written with a 5-bit scheme (4 data bits + 1 parity), which allows for sixteen possible characters, which are the numbers 0-9, plus the six characters  : ; < = > ? . The selection of six punctuation symbols may seem odd, but in fact the sixteen codes simply map to the ASCII range 0x30 through 0x3f, which defines ten digit characters plus those six symbols. The data format is as follows:
  • Start sentinel — one character (generally ';')
  • Primary account number (PAN) — up to 19 characters. Usually, but not always, matches the credit card number printed on the front of the card.
  • Separator — one char (generally '=')
  • Expiration date — four characters in the form YYMM.
  • Service code — three digits. The first digit specifies the interchange rules, the second specifies authorisation processing and the third specifies the range of services
  • Discretionary data — as in track one
  • End sentinel — one character (generally '?')
  • Longitudinal redundancy check (LRC) — one character; a validity character calculated from the other data on the track. Most reader devices do not return this value to the presentation layer when the card is swiped, and use it only to verify the input internally to the reader.
Targeting track 2 data in a grep search is also straightforward, though a reading of the above standard suggests the following search can miss.  I haven't seen any start sentinels or separators other than those in the expression:
$ grep -E '^;[0-9]{16,19}=' file
Though the preceding search has worked for me, the specifications for track 2 allow for other start sentinels and field separators.  A better but slower search would be:
$ grep -E '^[:;<=>?][0-9]{16,19}[:;<=>?]' file
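To see the difference between the two searches, both can be run against fabricated samples: a standard track 2 string and a contrived variant using the alternate sentinel characters the encoding permits (both card numbers are made up):

```shell
# Standard track 2 sample and a contrived variant with alternate sentinels
std=';4111111111111111=11031010000012345?'
alt=':4111111111111111<11031010000012345?'
# The narrow expression matches only the standard sample
printf '%s\n%s\n' "$std" "$alt" | grep -E '^;[0-9]{16,19}='
# The broader (but slower) expression matches both
printf '%s\n%s\n' "$std" "$alt" | grep -E '^[:;<=>?][0-9]{16,19}[:;<=>?]'
```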

Tracking Down Both Tracks at Once

I composed the following search to track down both tracks, and display the date and time of the swipe typically captured by the skimmer device.  The "-B1" argument shows one line of data before the hit, which is where the date and time are recorded.
$ grep -E -B1 '^(%.[0-9]{16,19}\^|;[0-9]{16,19}=)' file
You may have noticed the caret ("^") at the beginning of each of the grep expressions.  It anchors the match to the beginning of a line of data.  This is a search optimization.  I will now demonstrate two additional optimizations, one for searching allocated files and the other for unallocated space (using The Sleuth Kit tool "blkls"):
Searching mounted file systems (find files, export 7-bit strings, grep for track data):
$ find . -type f | while read -r i; do strings "$i" | grep -E -B1 -H --label="$i" '^(%.[0-9]{16,19}\^|;[0-9]{16,19}=)'; done
Searching unallocated space of a forensic image (export unallocated, filter out 7-bit strings showing offset, grep for track data).  Note that the start-of-line anchor must be dropped here, because strings -td prefixes each string with its decimal offset:
$ blkls -o 63 image.dd | strings -td | grep -E -B1 '%.[0-9]{16,19}\^|;[0-9]{16,19}='
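As a sketch of what this pipeline sees, a chunk of raw data with embedded track strings (fabricated card numbers, standing in for real blkls output) can be filtered the same way:

```shell
# Simulated chunk of unallocated space: NUL-separated junk plus one
# track 1 and one track 2 string (both fabricated)
printf 'junk\000\000%%B4111111111111111^DOE/JOHN^1103101?\000noise\000;4111111111111111=1103101?\000' > chunk.bin
# strings -td prints each printable string with its decimal byte offset;
# the unanchored expression then finds both track strings
strings -td chunk.bin | grep -E '%.[0-9]{16,19}\^|;[0-9]{16,19}='
```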

Difficult Disk Imaging

In the past few weeks, I've had the opportunity to make forensic disk images of what one might call "non-standard" devices.  The devices were a Lenovo Thinkstation D20, an Acer Netbook, and a MacBook Air.

Lenovo Thinkstation

The Lenovo presented a few problems.  First, it was seized and disassembled by non-computer-forensics professionals.  Translated: the drives were removed and not marked as to their bays or cabling.  Two drives were identical in size and the third was over three times larger.  All were SAS (Serial Attached SCSI) drives, which have non-standard connectors.  I had no connectors with which to remove the drives and image them individually (though cannibalization from the Lenovo was possible), and the computer specs suggested that there was a RAID array on the two drives of matching size.

What did I do?  I decided to use CAINE, a forensic boot disc, and an external hard drive.  CAINE would allow me to use the Lenovo for the specialized connectors needed for the SAS drives, and allow the hardware controller on the motherboard to reassemble the RAID array.

The first step was to ensure I could boot the system with CAINE.  I was unable to boot from CD-ROM using the Lenovo's optical drive (which was unusual, to be sure) but I was able to get a USB version of CAINE booted.  I ensured, by adjusting the BIOS, that the USB would be the first device to boot.

I reinstalled and connected the drives, uncertain of the proper order, and booted CAINE.  Luckily for me, the on-board RAID controller detected the disks, reported that there had been a change in the devices (the drive order), and then correctly reassembled the array.  CAINE reported two drives (the large disk, with the OS as it turns out) and the array.  I imaged both to the external drive with Guymager, a graphical front end for libewf, an open-source disk imaging library and toolset that produces images in Expert Witness Format.

Acer Netbook

The Acer Netbook was probably the least troublesome device, but did not lend itself well to disassembly.  Drive removal and hardware write-blocking are the ideals in forensic disk imaging.  However, this isn't always possible or convenient.  In the case of the Acer Netbook D255, there was no simple hard disk cover to remove.  Hard disk access appears to involve keyboard removal and an underlying cover, or seven case screws and an almost surgical separation of plastic catches.  Simply put, I didn't want to break the netbook, and I know that some storage devices have ROM chips that prevent them from being read when disconnected from the particular motherboard anyway.

Again, CAINE to the rescue.  In the case of the Acer, there was no boot menu.  Changes to the BIOS were needed to ensure the USB device booted before the internal hard disk.  I tried to boot CAINE with an attached USB optical drive and with a USB version of CAINE.  The Acer did not register the USB optical drive in the BIOS, but the USB flash drive with CAINE was detected.  I booted from the USB, mounted an external hard drive, and imaged the drive with libewf.

MacBook Air

This was my first encounter with the MacBook Air.  Like the Acer, the construction of the device discouraged disassembly.  I know that Macs won't boot from a FAT-formatted USB because of the EFI boot schema.  However, booting from CD-ROM is possible by pressing and holding the "C" key immediately after powering on the computer.

I attached a USB CD-ROM drive because the Air does not have an optical drive like other MacBooks.  I initially booted with CAINE, but the graphics drivers were incompatible with the Mac.  I attempted a graphics safe-mode boot and a text-only boot, but got the same result: a garbled display that made proceeding impossible.

I obtained a second forensic boot disc called DEFT.  It is a newer release than CAINE, and I hoped it had updated graphics drivers that might overcome the problem.  The initial boot froze the system.  DEFT boots into text mode, and there are no other menu choices.  However, a series of boot options at the bottom of the boot screen reminded me of some boot issues I have experienced in the past several versions of Ubuntu, on which both of these forensics distributions are based.  I passed the "nomodeset" option in the F6 menu (curiously named "Password"), and DEFT booted to a text screen.  I was also able to boot to a GUI with the deft-gui command.

With this in mind, I revisited CAINE.  I have a preference for CAINE because I understand how it works, including its implementation of write-blocking, and I have tested it.  The CAINE developer, Nanni Bassetti, is ever ready to help new users and explain his techniques.  I do not know how DEFT works, and the information is not readily available, at least not in English.  This is not to disparage DEFT in any way.  I'm just trying to highlight the fact that we must use tools that we understand and have tested.

I again booted the MacBook Air with CAINE.  At the boot screen, there is no obvious way to pass boot options.  However, pressing Escape brings up a boot command line, and pressing Tab displays the boot options on the original boot screen.  I passed the arguments "textonly nomodeset" and CAINE successfully booted to a console.  At the console, I was able to start the GUI with "startx".  I accomplished imaging as before, with libewf and an external USB hard disk drive.

Recovering Data from Deleted SQL records

I previously posted about parsing the iPhone SMS database.  The particular focus was the recovery of deleted messages.  I explained that there are really two types of deleted messages in play: records flagged as deleted within the database (thus not really deleted at all) and records deleted from the database itself.  I discuss the second type of deleted data recovery here.

An SMS message deleted by the iPhone user is flagged in the database as "deleted."  What happens next is not clear to me because I don't currently have an iPhone with which to experiment, and I am not a sqlite expert.  I am uncertain whether the database immediately deletes the record or whether that occurs on sync (there are a couple of database "triggers" I don't yet fully understand).  If the data is only flagged deleted, then the record can be read with sqlite tools, which is what I discussed in my previous post.

But at some point a record can be deleted from the database, and as a result, it is not viewable with sqlite tools. So how do we find that data, and more importantly, how do we distinguish it from the non-deleted data?  It helps, at this point, to understand what happens to deleted records in sqlite.  When you delete a record, the space allocated to the record gets added to a free-list.  In other words, the size of the database doesn't get any smaller with record removal, but the space is marked as available for future records. This remains true until the database is "vacuumed."

A database can have its free space removed with the conveniently named "vacuum" command.  This rebuilds the entire database, removing the space in the free-list and shrinking the database.  Sqlite can be compiled to do this automatically, but fortunately for us, this is not currently the case for the sqlite compilation in iOS.  We can use the vacuum command to help differentiate the data from deleted records and non-deleted records, however.

The method I used was simple and would apply to any sqlite database, not just the iPhone sms.db.

  1. Make a copy of the sms.db: 'cp sms.db sms.vac.db'
  2. Vacuum the database with: 'sqlite3 sms.vac.db vacuum'
  3. Examine the difference between the vacuumed file and the original file: 'diff sms.db sms.vac.db'
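The steps above can be walked through end-to-end on a throwaway database (the table and contents below are invented for the demonstration).  The deleted row's text survives in the original file but not in the vacuumed copy, which is exactly the difference the method exposes:

```shell
# Build a demo database, insert two rows, and delete one;
# sqlite adds the freed space to the free-list rather than wiping it
sqlite3 demo.db "CREATE TABLE message(text TEXT);
INSERT INTO message VALUES ('keep me'),('delete me please');
DELETE FROM message WHERE text = 'delete me please';"
# Steps 1 and 2: copy the database and vacuum the copy
cp demo.db demo.vac.db
sqlite3 demo.vac.db vacuum
# Step 3: the deleted text remains only in the unvacuumed original
strings demo.db | grep -c 'delete me please'
strings demo.vac.db | grep -c 'delete me please'
```

Note this assumes sqlite was not built with secure_delete enabled (the default), in which case deleted record content would be zeroed immediately.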
There are obvious shortcomings with such a method.  The foremost problem is that the data is unstructured, and this causes interpretation difficulties.  However, there is no other method of which I know that will produce structured data.  And unstructured data can still be useful in an investigation, if only to verify a statement or corroborate another piece of data.

I am aware of one attempt at forensic recovery of deleted sqlite records.  It is specific to the Firefox browser history.  For more information, take a look here.

Calculating Embedded OS X Times

I recently examined a Macintosh computer where I needed to look at Internet history.  The only installed browser was Safari, and the history was stored in /Users//Library/Safari/History.plist, an XML file with visit dates recorded in epoch format.  An example of such a time is "314335349.7".


The tricky thing is realizing that not all so-called "epoch" time is the same.  In a 'nix system, epoch time is defined as the number of seconds since 01/01/1970 00:00:00.  However, Mac epoch time is defined as the number of seconds since 01/01/2001 00:00:00.  EDIT: Mac time is also known as "Mac Absolute Time."


Unix epoch time is a simple conversion in Linux.  A current time is a ten-digit number that resembles "1298307237".  To convert that to a human-readable date, simply:
$ date -d @1298307237
Mon Feb 21 08:53:57 PST 2011 
The date command defaults to calculating from 1970, not 2001 as we need for our Mac time conversion.  To obtain a proper conversion, we need to tell the date command the starting point of the date calculation thusly:
$ date -d "2001-01-01 314335349.7 sec PST"
Sat Dec 18 03:22:29 PST 2010
Knowing the format, scripting the conversion should be relatively trivial.  I hope this helps someone.  I know I'll be back to this page often to remind myself of the conversion syntax!
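One way to script it is a small shell function (the name macdate is my own; it works in UTC for determinism and strips any fractional seconds, which only carry sub-second precision):

```shell
# Convert Mac Absolute Time (seconds since 2001-01-01 00:00:00 UTC)
# to a human-readable UTC date
macdate() {
    # ${1%%.*} strips a fractional part such as the ".7" in the example
    date -ud "2001-01-01 ${1%%.*} sec" '+%a %b %e %T UTC %Y'
}
macdate 314335349.7   # Sat Dec 18 03:22:29 UTC 2010
```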

EDIT:  When processing Mac databases, like those found on the iPhone, it is possible to convert the times using SQLite commands.  I determined the offset in seconds from the Unix epoch to the Mac Absolute Time epoch with "SELECT strftime('%s', '2001-01-01 00:00:00');", which yields 978307200 seconds.  This value can be added to a Mac Absolute Time and then converted to local time with the SQLite datetime() function thusly: datetime(time_field + 978307200, 'unixepoch', 'localtime').
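Both steps can be checked from the shell using the Safari example value (the time_field name in the post is illustrative, so the literal value is substituted here; 'localtime' is omitted so the result is in UTC regardless of the examiner's timezone):

```shell
# Derive the offset from the Unix epoch to the Mac epoch
sqlite3 :memory: "SELECT strftime('%s','2001-01-01 00:00:00');"           # 978307200
# Apply it to a Mac Absolute Time value; add 'localtime' for local output
sqlite3 :memory: "SELECT datetime(314335349 + 978307200, 'unixepoch');"   # 2010-12-18 03:22:29
```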

Wednesday, February 2, 2011

Parsing the iPhone SMS Database

I was asked recently to help recover deleted messages from an iPhone SMS database.  Conveniently, this is called "sms.db" on the iPhone and it is located in the /mobile/Library/SMS/ directory.  It is a sqlite3 file type, and there are several GUI tools to read sqlite3 databases, as well as a sqlite3 command line shell.

It seems like a pretty straightforward exercise at first blush.  However, "How do I recover deleted messages from an iPhone database?" is not as simple a question as it first seems.  When a user chooses to delete a message on the iPhone, the record is flagged as deleted in the database, but the record is not deleted from the database.  This means that records flagged as deleted are still recoverable with sqlite3 tools.

But when the phone is synced with iTunes, the records flagged "deleted" are actually removed (deleted) from the database itself and no longer recoverable with sqlite3 tools.  However, the data is still within the database, but the space it occupies is added to a "free list" for use by new data.  In other words, the data can still be recovered before it is overwritten by new data or the database is "vacuumed", a sqlite3 process that rebuilds the database removing all the free space and reducing the size of the database.  As a caveat, sqlite3 can be configured to overwrite records immediately upon deletion, but this is not the case for the iPhone at present.

So really, there are two types of deleted data to be sought: records flagged as deleted (not really deleted records at all), and records deleted from the database.  I'll discuss the first type in the remainder of this discussion.  I'll consider ways to recover deleted records in another post.

The sms.db has the following tables:
_SqliteDatabaseProperties, msg_group, group_member, msg_pieces, message

The message table contains the text messages of interest.  The contents of the table can be displayed from the shell with:
$ sqlite3 -header sms.db "select * from message"
ROWID|address|date|text|flags|replace|svc_center|group_id|association_id|height|UIFlags|version|subject|country|headers|recipients|read
1|+17132619725|1281033415|Hey, what's up?|2|0||1|0|0|4|0||us|||1
...
As you can see, the interesting fields (revealed because the -header argument was used) are "ROWID," "address" (the source phone number), "date" (in unix epoch format), "text," and, of less obvious value, "flags."  Flags indicates the type of message, i.e.:
  • 2 = received
  • 3 = sent
  • 33 = Message send failure
  • 129 = deleted
  • (source: Adam Crosby)
With a little sqlite razzle dazzle, we can get a well formatted output in a more human readable form:
$ sqlite3 -header sms.db "select ROWID as row, case flags when 2 then 'rcvd' when 3 then 'sent' when 33 then 'fail' when 129 then '*del' else 'unkn' end as type, address as phone_no, datetime(date,'unixepoch','localtime') as date, text as message from message"
row|type|phone_no|date|text
1|rcvd|+17132619725|2010-08-05 11:36:55|Hey, what's up?
...
SQL syntax can be a bit tricky and can look a bit intimidating.  But by using internal commands, you can get a tremendous speed boost over using external text tools.  By way of explanation, I'll break down the command:

  1. sqlite3 -header sms.db  #open the database with sqlite and display column names 
  2. "select  #display the following columns from the table
  3. ROWID as row,  #display the ROWID column first, but rename it "row" in the output
  4. case flags when 2 then 'rcvd' when 3 then 'sent' when 33 then 'fail' when 129 then '*del' else 'unkn' end as type,  #display the flags column next, but change the "2" flag to "rcvd", "3" to "sent", etc., and rename the row "type"
  5. address as phone_no,  #display the address column renamed as "phone_no"
  6. datetime(date,'unixepoch','localtime') as date,  #convert the content of the date field to local time and display the column as "date"
  7. text as message  #display the text column renamed as "message"
  8. from message"  #all the columns of data are read from the message table
The command can be simplified if column renaming isn't required.  I included it here to make the output as clear as possible, and since the command can be incorporated into a script, it need only be typed once.  The quoted part of the command could be inserted into a GUI sqlite browser if that is your tool of preference.  The query can be adjusted to show just deleted messages, for example, by appending "where type like '*del'".
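As a quick check, the query can be run against a minimal stand-in for the message table (the rows, times, and phone number below are fabricated, and only the columns the query uses are created).  Filtering on the flags column returns only the record flagged deleted:

```shell
# Build a minimal stand-in for the sms.db message table; the implicit
# sqlite ROWID supplies the row numbers
sqlite3 sms-demo.db "CREATE TABLE message(address TEXT, date INTEGER, text TEXT, flags INTEGER);
INSERT INTO message VALUES
 ('+15555550100',1281033415,'Hey, what''s up?',2),
 ('+15555550100',1281033500,'Not much.',3),
 ('+15555550100',1281033600,'Wrong number, sorry.',129);"
# The same query, limited to deleted messages (UTC dates shown here)
sqlite3 -header sms-demo.db "select ROWID as row, case flags when 2 then 'rcvd' when 3 then 'sent' when 33 then 'fail' when 129 then '*del' else 'unkn' end as type, address as phone_no, datetime(date,'unixepoch') as date, text as message from message where flags = 129"
```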

Notice I did not link the data in the message table to any other tables in the database.  While this can be done, my task was to seek out deleted messages.  And as I said earlier, I will explore methods for recovering deleted records from sqlite databases in a future post.

As always I welcome any comments or suggestions....