Sunday, April 30, 2006

The Wire from JSI Telecom

The Hi-tech battle of the bands.
HBO The Wire plagiarism :)
Date: 11/2/2005

David Peter - +/-
Gilles Roger - Cool!
Jason Leach - Rulez!
Jim Owens - Go Sens Go!
Kevin Lawrence - Sucks - without rock guitar :)
Stephane Pareilleux

Thursday, April 20, 2006

Wash Up, Doc?

The next time that you're sitting in your doctor's office, ask yourself this question: Has she washed her hands since examining her last patient? Better yet, ask your doctor that question: The odds are less than one in three that she has.

How about the harried waitress who plucks your morning toast out of the toaster at the diner, or the prep cook who makes your midday Caesar salad? According to the U.S. Food and Drug Administration, as many as 1,400 people die each year as a result of food poisoning that can be traced directly to poor restaurant hygiene. The chances that you might get sick are frighteningly higher—roughly one in fifteen Americans in any given year will spend a miserable day or night that a little soap and water could have prevented.

From the outset, employees have been concerned that the infrared badge technology can reveal private and potentially embarrassing information, particularly about time spent in the bathroom. That concern seems almost quaint today, given the fact that infrared badge technology is being integrated into systems that are specifically designed to monitor employee hygiene habits.

You might think that the bathroom is the last bastion of privacy, a scrutiny-free zone that even employers can't invade. That may once have been true, but the potential for ruinously expensive litigation is pushing employers to overcome even the most basic concerns. Even if no one actually dies, a wave of food poisoning can ruin a restaurant; likewise, a malpractice suit resulting from the death of a patient due to poor hygiene can cost a hospital tens of millions of dollars. When numbers like that are tossed around, employers have a hard time justifying a continued respect for employee privacy.

In the spring of 1997, a New Jersey-based company, Net/Tech International, Inc., introduced the Hygiene Guard system for use in food service and health care facilities. The system uses sensors in employee restrooms, including sensors on soap dispensers and faucets, to make sure that employees engage in proper hygiene. The first system was installed at the Tropicana Casino and Resort in Atlantic City, with others soon following at Georgetown University Hospital and the William Beaumont Army Medical Center in El Paso, Texas.

Like other infrared sensor systems, the restroom sensors are tied into a central network that maintains a log of each employee's adherence to proper hygiene procedures. If an employee leaves the bathroom without washing up, an entry is made in the employee's log on the main computer. The Hygiene Guard system can also be programmed to cause an employee's badge to start flashing if he fails to wash up properly. Data from the various logs can be sorted and printed out in a variety of ways, and preformatted reports are available that can be reviewed with specific employees or posted on an employee bulletin board.

A variant on the Hygiene Guard system is produced by a Weymouth, Massachusetts, company called UltraClenz, which manufactures the Pro-Giene system. UltraClenz works with employers to establish an appropriate hand-washing schedule, and then issues each employee an infrared badge that beeps and flashes each time the employee is supposed to wash his hands. When the employee goes into the restroom, sensors in the sink faucet, soap dispenser, and towel dispenser record whether the employee has actually used them. It's not a system designed to make employees feel particularly dignified:

  • The system instructs the employee on each step in sequence by both voice capacity as well as a LCD read out. The time interval for each function is determined by you and consists of the following functions; wetting hands, applying soap, lathering hands for a full twenty seconds, rinsing hands, and drying with a towel. The individual is recognized as having completed a protocol hand wash that is recorded and time stamped.
  • Pro-Giene will also, on a real-time basis, recognize employees who have or have not successfully completed the procedure or who may be overdue for their scheduled wash.
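The protocol described in the two points above amounts to an ordered sequence of sensor events with a minimum duration on the lathering step. A minimal sketch of that check follows; the event format, step names, and the `check_wash` function are assumptions for illustration, not the actual Pro-Giene software:

```python
# Hypothetical sketch of a protocol hand wash: sensor events must arrive
# in a fixed order, and lathering must last a full twenty seconds.

PROTOCOL = ["wet", "soap", "lather", "rinse", "dry"]
MIN_LATHER_SECONDS = 20

def check_wash(events):
    """events: list of (step_name, duration_seconds) tuples reported by
    the faucet, soap dispenser, and towel dispenser sensors.
    Returns True only when every step occurs in order and the lathering
    step lasted at least the required twenty seconds."""
    steps = [name for name, _ in events]
    if steps != PROTOCOL:          # a skipped or out-of-order step fails
        return False
    durations = dict(events)
    return durations["lather"] >= MIN_LATHER_SECONDS
```

A completed wash would then be time-stamped and written to the employee's log, exactly the kind of record the reports described below are built from.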

Given the limited attention that Congress has paid to employee privacy rights in general, it's not particularly surprising that there is no "Freedom from Employer Bathroom Monitoring Act." As we've seen, employer monitoring of our most basic communication—speech—is permitted when it is in the "ordinary course of business." While courts have shown some sympathy for the idea that certain spaces, even on business property, are off-limits to surveillance (e.g., bathrooms and locker rooms), it's hard to argue that the prevention of contamination and illness are not part of the "ordinary course of business" in a hospital or restaurant.

Infrared Badges at Work

By 1997, nurses in over 200 hospitals were wearing infrared badges. The manufacturer, Executone Information Systems, called its version the Infostar Infrared Locator System, and described it as "an infrared-based, wireless locating system to help healthcare staff quickly find people and equipment."

More recently, the lead in infrared badge technology has been assumed by Versus Technology, based in Traverse City, Michigan. Versus has been particularly aggressive in integrating telephone systems with the badge technology. Its PhoneVision, Versus says, makes it possible for callers to "'see' (through the telephone) the location of the person [they] are trying to reach":
By wearing a lightweight badge which emits infrared signals containing location data, the individual's location is instantly available. The location information is received by the system and is accessible to users through the telephone. By simply entering the person's extension, PhoneVision identifies the exact location of a person at that point in time. Once a person is located, the user may choose to ring the nearest extension, hear a list of others at that location, or automatically be forwarded to voice mail.

The chief benefit of its infrared badge system, Versus claims, is the ability to locate staff members quickly, improving staff efficiency and enhancing patient care. In addition to locating specific individuals, the Versus system can also be configured to display the badges of various groups in different colors, so that administrators can see at a glance the locations, for instance, of all the nurses or all the cleaning staff.

In addition to locating individuals, the Versus PhoneVision system is also designed to locate equipment. Any piece of equipment can be rigged with an infrared tag containing a unique code:
Simply dial into the system, enter the equipment's ID number, and PhoneVision will automatically identify its location. If desired, you may also choose to hear who is with the equipment or hear what other equipment is at that location.

There's some concern on the part of employees, and nurses in particular, that systems like PhoneVision will increase the tendency of management to look at them as merely bipedal pieces of equipment. Certainly, an infrared badge increases the granularity of the data available to an employer, i.e., the level of detail about each employee's activities during the course of the day. Employers argue that the additional information will help them evaluate internal processes to make them more efficient, and that the system will also help reduce ambient noise (since employees can be located quickly without having to be paged). Nonetheless, employees are concerned that the accumulated infrared badge data will be another tool to help employers demand additional work or deny salary increases.

The Active Badge System

The development of the infrared LED in the early 1960s has given rise to a huge number of applications, the most familiar of which is the remote control. In 1989, researchers at the Cambridge University Computer Laboratory in Cambridge, England, began work on a system that incorporated infrared LEDs into employee identification badges. After roughly four years of work, their research resulted in the development of the "Active Badge."

The basic concept of the Active Badge is straightforward. Employees are given a special identification card equipped with an infrared LED that sends out a unique code every fifteen seconds or so. If the card is within six meters of an infrared sensor (mounted on a wall or ceiling), the code is read by the sensor. The sensor is connected to a network of other sensors, all of which are linked to a central station. The central station periodically retrieves data from each of the sensors and uses the information to compile a map of each badge's current location.
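The badge-to-sensor-to-central-station loop just described can be modeled in a few lines. This is an illustrative sketch only; the class names and polling details are assumptions, not the Cambridge implementation:

```python
# Each wall-mounted sensor records the badge codes it hears; the central
# station polls every sensor and compiles a badge -> location map from
# the most recent sighting.

class Sensor:
    def __init__(self, location):
        self.location = location
        self.sightings = {}              # badge_code -> tick of last sighting

    def hear(self, badge_code, tick):
        """Called when a badge within range emits its unique code."""
        self.sightings[badge_code] = tick

class CentralStation:
    def __init__(self, sensors):
        self.sensors = sensors

    def poll(self):
        """Retrieve data from each sensor and build the current location
        map, keeping the most recent sighting when a badge has been
        heard by more than one sensor."""
        latest = {}                      # badge_code -> (tick, location)
        for sensor in self.sensors:
            for badge, tick in sensor.sightings.items():
                if badge not in latest or tick > latest[badge][0]:
                    latest[badge] = (tick, sensor.location)
        return {badge: loc for badge, (_, loc) in latest.items()}
```

In this model a badge that stops transmitting simply keeps its last known location in the map, which is exactly the behavior discussed below when a badge is left in a drawer.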

As careful readers have already noted, the most obvious limitation of the Active Badge system is that it tracks badges and not people; it only tracks people if they actually wear or carry the badges, and more specifically, the badges that have been assigned to them. In the view of the designers of the Active Badge, the fact that you can take the badge off is one of the system's advantages:

There will always be some days when for whatever reason somebody does not wish to be located. This is easy to solve because the system tracks badges and not people. Anybody in this situation can easily remove their badge and leave it on a desk. The Active Badge system will now be fooled into concluding that person is somewhere where they are not. This kind of escape mechanism is not an undesirable system feature and may be an important factor in making this system acceptable for common use.

Technically speaking, when an Active Badge is put in a drawer or an employee's pocket, it slowly goes to sleep (as a power-saving measure). The sensor network will continue to display the badge's last known location, but the likelihood of finding the badge at that location (displayed as a probability on the Active Badge information screen) will steadily decrease.

The question for employees is how well such disappearances from the sensor grid will be tolerated by their employers. In a work environment where there is a strong management expectation that employees will wear their Active Badges, periodically taking off the badge and "disappearing" from the sensor system will undoubtedly be perceived as negative behavior. Companies can (and sometimes do) impose a requirement that employees wear an Active Badge at all times, but with a technology that immediately raises so many privacy hackles ("Let's see, George, it says here that you spent a total of two hours yesterday in the second floor restroom—are you feeling OK?"), employers are taking a more persuasive approach.

The chief benefit that employers offer in exchange for wearing the Active Badge is a more efficient workplace. The Active Badge system makes it easier to receive phone calls while moving around a building and makes it easier to locate coworkers. Call-routing, of course, is only one of the Active Badge's capabilities. The Active Badge system was designed with the following commands:
  • WITH—a list of the other badges in the same area as the target badge
  • LOOK—a list of badges currently located in a particular area
  • NOTIFY—an alarm that goes off when a particular badge is picked up by the sensor system. (NOTIFY was designed to make it possible to deliver an urgent message to someone who had been out of the building and had just returned. It could be easily modified to sound an alarm when a particular badge enters an area where it is not authorized.)
  • HISTORY—a log of the badge's location over a period of time
When the Active Badge was first developed, the period of history recorded was limited to a single hour, and the information was stored in dynamic memory, not archived to permanent storage. Back in 1992, storage was still quite expensive: roughly $4 per megabyte. By the summer of 2002, you could buy a 160-gigabyte hard drive for $229.98. One hundred sixty gigabytes will hold a lot of Active Badge data.
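The four commands above can be re-created over a simple in-memory log of sightings. The class and method names here are invented for illustration and are not the original Cambridge code:

```python
# A sighting is a (tick, badge, area) record. WITH, LOOK, and HISTORY are
# queries over the log; NOTIFY is a one-shot callback fired the next time
# a watched badge is picked up by any sensor.

class BadgeLog:
    def __init__(self):
        self.entries = []          # (tick, badge, area) in arrival order
        self.current = {}          # badge -> last known area
        self.watches = {}          # badge -> callback (NOTIFY)

    def notify(self, badge, callback):
        """NOTIFY: fire callback(badge, area) at the badge's next sighting."""
        self.watches[badge] = callback

    def sighting(self, tick, badge, area):
        self.entries.append((tick, badge, area))
        self.current[badge] = area
        if badge in self.watches:
            self.watches.pop(badge)(badge, area)

    def with_(self, badge):
        """WITH: other badges currently in the same area as the target."""
        area = self.current.get(badge)
        return sorted(b for b, a in self.current.items()
                      if a == area and b != badge)

    def look(self, area):
        """LOOK: badges currently located in a particular area."""
        return sorted(b for b, a in self.current.items() if a == area)

    def history(self, badge):
        """HISTORY: the badge's logged locations over time."""
        return [(t, a) for t, b, a in self.entries if b == badge]
```

The unauthorized-area alarm mentioned above would just be a NOTIFY callback that checks the reported area against an access list.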

The Heat Is On

Each morning when I go into the kitchen to turn on the teakettle, I pick up a small plastic box and press a bright orange button in the upper left-hand corner. Fifteen feet across the room, a shelf-top stereo system powers up and Vermont Public Radio begins providing me with the day's news and weather. The same technology is familiar to anyone who has purchased a television in the last twenty years—the infrared remote control, a device now widely blamed for robbing us of the eighteen calories per hour we would burn by actually getting up off the couch to change television channels.

Infrared remote controls, first introduced in the early 1980s, were a big improvement over earlier designs, which used wires physically connected to the television's dials, pulses of light, or ultrasonic signals. (The early designs all had serious drawbacks. Wire remotes were slow and sometimes made people trip; photovoltaic remotes didn't work well in sunlight; and ultrasonic remotes often made dogs howl.) Today's infrared remote control devices make use of a discovery that occurred 200 years ago, when Sir Frederick Herschel used a prism to split light into its component colors and measured the temperature of each color. He observed that the temperature increased as he progressed through violet, blue, green, yellow, orange, and red, and that the very highest temperatures were just beyond the red section of the spectrum. He concluded that there were invisible rays beyond red that behaved like visible light. Herschel coined the phrase "calorific rays" for the invisible beams; they later became known by their current name, infrared rays.

Despite our inability to see infrared rays directly, Herschel's discovery has proven to be immensely valuable. Infrared cameras pointed out into space allow us to peer through interstellar dust clouds. Other cameras pointed earthward use the infrared portion of the spectrum to monitor the environment, track weather around the globe, and even discover centuries-old footpaths and prehistoric settlements. On Earth, thermal imaging cameras are used in a wide variety of applications, including the maintenance of mechanical systems, the testing of personal computer circuit boards, search and rescue efforts, and medical diagnosis.

Magstripe Cards

Currently, the most widely available and heavily implemented technology for tracking employee movement is the same familiar magnetic strip (or "magstripe") found on the back of the country's more than 1.4 billion credit cards. Magnetic strip technology, which has been around since the early 1970s, is now commonly integrated into employee IDs.

The typical magstripe is a thin strip of plastic film containing thousands of small (1/20-millionths of an inch) magnetic particles. Using a magnetic field, the particles in various sections of a magnetic strip can be oriented to the North or South Pole. Once information has been recorded on the strip, it can be deciphered by a magstripe reader.

Typically, an employer will issue IDs that encode certain information on the ID magstripe, such as an employee's name, ID number, security level, and so forth. Depending on the level of security in place at the company, the employee will have to swipe her ID through a magstripe reader in order to gain access to the parking lot, the front door, and/or various internal doorways. The magstripe readers are typically wired into a network, so that when an employee swipes her card, the information in the strip can be verified by a central database. In addition, most such systems are specifically designed to record the date, time, and identity of each person who goes through a business's various checkpoints.
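The checkpoint logic described above is straightforward to sketch: verify the swiped ID against a central database, then record the attempt whether or not access is granted. The employee records and function names below are hypothetical illustrations, not any vendor's actual system:

```python
import datetime

# A toy central database and access log. Every swipe, granted or denied,
# is recorded with date, time, identity, and checkpoint.

EMPLOYEES = {"1042": {"name": "A. Smith", "security_level": 2}}
access_log = []

def swipe(employee_id, checkpoint, required_level, when=None):
    """Verify a card swipe against the central database and log it."""
    when = when or datetime.datetime.now()
    record = EMPLOYEES.get(employee_id)
    granted = record is not None and record["security_level"] >= required_level
    access_log.append({"time": when, "id": employee_id,
                       "checkpoint": checkpoint, "granted": granted})
    return granted
```

Note that the log grows only when a card is actually swiped, which is precisely the limitation the next paragraph discusses.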

One drawback to a card reader system is that information about employee movement is collected only when the employee swipes his card. That limits the amount of information and level of detail that an employer can collect. An employer could set up a system that required employees to swipe their cards to go in or out of every door, but doing so has obvious practical difficulties, including cost and inconvenience. In addition, the hassle of constantly swiping an ID card would undoubtedly spur an employee rebellion. Magstripe cards have a number of other drawbacks. The physical process of swiping a magnetic strip tends to wear it out, which means that the strip eventually needs to be replaced. Exposure to a magnetic field can scramble or erase the data. And because the data format is well known and the raw materials and software needed to produce magstripe cards are readily available (thanks largely to the fact that the same technology is used on credit cards, which are a lucrative target), the cards are all too easy to duplicate.

The main concern for employers who use card readers (magstripe or otherwise) to monitor access and movement is the phenomenon of "tailgating," when one employee swipes his card and other employees pass through without swiping theirs. Some businesses have gone so far as to make "tailgating" a forbidden practice, and there's a growing industry of companies that market devices specifically designed to prevent tailgating. For instance, Designed Security, Inc., in Bastrop, Texas, offers a product called the ES520 Tailgate Detection System, which sounds an audible alarm or sends an alarm signal to a guard station if more than one person tries to pass the system on a single card swipe.

A challenge facing any employer who installs a security system is that employees often won't use it or will try to get around it. But security continues to be a critical issue—employers want the ability to know where their employees are and where they've been. As a result, employers are showing strong interest in tracking systems that require a minimum of employee participation. Until RFID technology becomes widespread, the leading candidate for more effective employee tracking is the incorporation of infrared technology into employee IDs.

Employer Bugs and Wiretaps

One significant aspect of the 1968 Omnibus Crime Control legislation was that Congress excluded switchboards and other types of business equipment from the definition of "interception devices." The practical effect of that exclusion was that for another twenty years, businesses were able to continue their nearly century-long practice of listening in on workplace conversations without fear of violating the new federal wiretap laws.

In the last fifteen years, there have been some efforts, most notably the Electronic Communications Privacy Act, to restrict the amount of eavesdropping that employers can do to their employees. Although the Act continues to favor employers, the threat of both civil and criminal liability has probably cut back on the amount of corporate eavesdropping that occurs.

Not surprisingly, however, there are no reliable figures on how many employers in this country are using hidden bugs and secret wiretaps to listen to their employees and their customers. In most states, it is illegal to record a conversation without the consent of all participants. The Granite Island Group, a Boston-based technical surveillance countermeasures firm, offers a list of the signs that you may be bugged:
  • People seem to know your activities when they shouldn't.
  • Your AM/FM radio has suddenly developed strange interference.
  • Electrical wall plates appear to have been moved slightly or "jarred."
  • The smoke detector, clock, lamp, or exit sign in your office or home looks slightly crooked, has a small hole in the surface, or has a quasi-reflective surface.
  • Certain types of items have "just appeared" in your office or home, but nobody seems to know how they got there. Examples include clocks, exit signs, sprinkler heads, radios, picture frames, and lamps.
  • You notice small pieces of ceiling tiles or "grit" on the floor or on the surface area of your desk.

Fading Telephone Privacy at Work

Over the last century, improvements in technology have dramatically changed our expectation of privacy when making a telephone call. In the early days of the American telephone system, private phone calls were virtually nonexistent. It was cheaper for the phone companies to install and operate shared lines, which in turn made them less expensive for consumers; as late as 1950, 75 percent of all of the phone lines in the United States were party lines, shared by as few as two families or as many as twenty-four.

To listen in on your neighbor's phone call, all you had to do was pick up your receiver. This was often regarded as a "feature" rather than a drawback; long before the phone companies introduced three-way and conference-calling technology, party lines enabled a number of neighbors to share the local gossip. The inherent appeal of party line technology was demonstrated in the 1980s, when phone companies introduced multiperson chat lines. The forerunners of today's Internet chat rooms and IRC channels, the party lines were best known for making it possible for teens to run up sometimes phenomenal phone bills.

In the latter half of the twentieth century, our privacy expectations regarding phone calls changed. The installation of advanced switching technology made it possible to dial numbers directly anywhere in the country without the assistance of an operator, who might be tempted to listen in. In addition, as the cost of telephone lines and equipment steadily dropped, the number of single-user lines increased, and consumers proved increasingly willing to pay for them. Over the course of a generation, we came to expect that a telephone conversation was as private as a face-to-face chat in our living room.

Our privacy expectations regarding phone calls have been reinforced by the actions of the Supreme Court and Congress. After more than forty years of decisions upholding wiretapping because it did not involve a physical invasion of space, in 1967 the Supreme Court reversed itself in Katz v. United States, holding that the constitutional protection from search and seizure protects people, not places. If we make a telephone call, the Court said, under circumstances that indicate a reasonable expectation of privacy, then government agents cannot intercept it without a warrant. Congress's Omnibus Crime Control and Safe Streets Act of 1968 was more of a mixed bag from a privacy point of view: While it did permit governmental agents to conduct wiretaps for the first time since the passage of the Communications Act in 1934, it also imposed strict requirements on the issuance of wiretap orders.

To a large degree, we have extended our expectation of privacy for phone calls to the workplace. For example, when we pick up the phone to make a call, we assume that no one is secretly listening in on an extension. In fact, one recent privacy poll found that 81 percent of us believe that employers have no right to monitor our phone calls at work.

But employers do have a right to monitor our phone calls, so long as the monitoring is within "the ordinary course of business," which is why we so often hear the phrase, "This call may be monitored to ensure quality service" or some similar variation. Your employer can also monitor your phone calls when you give either explicit or implied consent. However, if your employer determines that you are making a personal call, he or she is supposed to stop any monitoring. As some commentators have pointed out, however, that loophole can give an employer 2–3 minutes of lawful eavesdropping. And not surprisingly, there is unequivocal evidence that some employers do not hang up at all.

In its 2001 annual survey of workplace monitoring and surveillance, the American Management Association estimated that 12 percent of the major U.S. corporations periodically record and review telephone calls, while 8 percent store and review voice mail messages. A far higher percentage (43 percent) monitor the amount of time that employees spend on the telephone, and check the phone numbers that have been called. Employers are motivated primarily by the impact excess phone calls can have on productivity, but also by concerns over the quality of customer service, possible loss of trade secrets, and security issues.

Tracking the phone numbers that an employee calls can be as simple as reading the monthly phone bill; a slightly more aggressive step involves installing a pen register, which records every number dialed from a particular phone. However, as computers and phones become increasingly integrated, more and more employers will be able to use PCs and software to track employee phone usage and produce detailed reports of all telephone activity.

According to Telemate.Net, one manufacturer of telephone monitoring software, over 20 percent of all workplace calls are personal. The company sells a software product called Telemate Call Accounting that a company can use to track all of the data generated by the company's telecom resources. The software allows management to identify "the calls and call patterns placed by individuals, teams, departments, and the organization." This software produces reports that:
  • Identify call volume, topics, destinations, sources, length, frequency and peak calling times.
  • Track account activity and build a marketing prospect/customer database.
  • Classify phone numbers to identify potential productivity distractions.
  • Integrate electronic calling card and DISA usage data to detect access code theft or fraud.
  • Identify inbound callers to spot abuse or incorrect routing of 800 calls.
Employers are particularly interested in such software because it helps them avoid concerns about the improper interception of employee telephone calls under the Omnibus Crime Control Act and the Electronic Communications Privacy Act; all the software does is analyze patterns of phone usage.

Is Your Cubicle (Donut) Bugged?

When we think of wiretapping, our minds leap naturally to the image of a trench-coated FBI or CIA agent, hunched with earphones over a jumble of wires through which the bad guy's voice can be heard with startling clarity. There's some truth to that image (or at least there was), but the reality is far broader. While government agents obviously do conduct wiretaps, their efforts at surveillance are more closely monitored and subject to far greater restrictions than businesses, who have been able to listen in on the conversations of their employees with near-impunity for decades.

Watching What You Say and What You Do in the Workplace

Few rights are as deeply treasured by American citizens as their freedom of speech. The deceptively simple guarantee of the First Amendment—"Congress shall make no law abridging the freedom of speech ..."—is deeply ingrained into our national psyche. If you want to stand on a street corner and describe loudly and in great detail how your elected officials are a bunch of idiots, then you have the right to do so. And even if you prefer not to spend your lunch hour criticizing the government, you may be one of the millions who enjoys listening to late-night comedians take potshots at the nation's politicians or to the vigorous give-and-take (or demented ravings, depending on your point of view) of talk radio.

You might be surprised to find, then, that freedom of speech doesn't mean the same thing at work that it means on a street corner or on late-night TV. If you would like to stand up in the middle of the company cafeteria and describe loudly and in great detail how the managers and directors of your company are a bunch of idiots, you technically have the right to do so, but no matter how vigorously you wave the Bill of Rights, it won't do much to protect your job prospects.

Thanks to various federal and state laws, you do have some protection if you criticize your boss on the phone or in a private conversation—as a general rule, eavesdropping on private conversations is not permitted. But as we'll see, you have far less protection when you conduct conversations via e-mail, or post comments on Web newsgroups, bulletin boards, or chat rooms.

Even more disturbingly, you may no longer be able to assume that you are free from casual observation while you are at work; a large number of different technologies are being used to track where you go and what you do while you're in the workplace. Infrared technology, for instance, is increasingly used to track employee movements. And as the size and cost of cameras steadily shrink, the frequency of video surveillance by employers is steadily increasing. Only a few states have passed laws regarding surreptitious videotaping, and virtually all of them contain exceptions that allow employers to conduct video surveillance for business-related reasons, a phrase that is usually broadly defined.

Classifications and Clearances

World War II, and the Cold War that followed, led NATO governments to move to a common protective marking scheme for labelling the sensitivity of documents. Classifications are labels, which run upward from Unclassified through Confidential, Secret, and Top Secret. The details change from time to time. The original idea was that information whose compromise could cost lives was marked ‘Secret’ while information whose compromise could cost many lives was ‘Top Secret’. Government employees have clearances depending on the care with which they’ve been vetted; in the United States, for example, a ‘Secret’ clearance involves checking FBI fingerprint files, while ‘Top Secret’ also involves background checks for the previous 5 to 15 years’ employment.

The access control policy was simple: an official could read a document only if his clearance was at least as high as the document’s classification. So an official cleared to ‘Top Secret’ could read a ‘Secret’ document, but not vice versa. The effect is that information may only flow upward, from Confidential to Secret to Top Secret, but it may never flow downward unless an authorized person takes a deliberate decision to declassify it.
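The read rule just described ("no read up") is easy to state in code: clearances and classifications share a single ordered scale, and a read is permitted only when the subject's level dominates the object's. This is a minimal sketch of that rule, not a full Bell-LaPadula implementation:

```python
# The classification lattice runs upward from Unclassified to Top Secret.
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]

def dominates(clearance, classification):
    """True when the clearance is at least as high as the classification."""
    return LEVELS.index(clearance) >= LEVELS.index(classification)

def may_read(subject_clearance, document_classification):
    # An official may read a document only if his clearance is at
    # least as high as the document's classification.
    return dominates(subject_clearance, document_classification)
```

Under this check information flows only upward: a 'Top Secret' official can read 'Secret' material, but not the other way around, exactly as the text describes.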

There are also document-handling rules; thus, a ‘Confidential’ document might be kept in a locked filing cabinet in an ordinary government office, while higher levels may require safes of an approved type, guarded rooms with control over photocopiers, and so on. (The NSA security manual gives a summary of the procedures used with ‘Top Secret’ intelligence data.)

The system rapidly became more complicated. The damage criteria for classifying documents were expanded from possible military consequences to economic harm and even political embarrassment. Britain has an extra level, ‘Restricted’, between ‘Unclassified’ and ‘Confidential’; the United States had this, too, but abolished it after the Freedom of Information Act was passed. America now has two more specific markings: ‘For Official Use Only’ (FOUO) refers to unclassified data that can’t be released under the Freedom of Information Act (FOIA), while ‘Unclassified but Sensitive’ includes FOUO plus material that might be released in response to a FOIA request. In Britain, restricted information is in practice shared freely, but marking everything ‘Restricted’ allows journalists and others involved in leaks to be prosecuted under Official Secrets law. (Its other main practical effect is that an unclassified U.S. document sent across the Atlantic automatically becomes ‘Restricted’ in Britain, and then ‘Confidential’ when shipped back to the United States. American military system builders complain that the U.K. policy breaks the U.S. classification scheme!)

There is also a system of codewords whereby information, especially at Secret and above, can be further restricted. For example, information that might contain intelligence sources or methods—such as the identities of agents or decrypts of foreign government traffic—is typically classified ‘Top Secret Special Compartmented Intelligence,’ or TS/SCI, which means that so-called need-to-know restrictions are imposed as well, with one or more codewords attached to a file. Some of the codewords relate to a particular military operation or intelligence source, and are available only to a group of named users. To read a document, a user must have all the codewords that are attached to it. A classification label, plus a set of codewords, makes up a security category or (if there’s at least one codeword) a compartment, which is a set of records with the same access control policy.
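The codeword rule above extends the simple level check: a security label becomes a (level, codeword-set) pair, and a subject dominates an object only when its level is at least as high *and* it holds every codeword attached to the object. A minimal sketch follows; the codewords used in the test are invented placeholders:

```python
LEVELS = ["Unclassified", "Confidential", "Secret", "Top Secret"]

def dominates(subject, obj):
    """subject and obj are (level, frozenset_of_codewords) labels.
    The subject dominates the object when its level is at least as high
    and its codeword set contains every codeword on the object."""
    s_level, s_words = subject
    o_level, o_words = obj
    return (LEVELS.index(s_level) >= LEVELS.index(o_level)
            and o_words <= s_words)       # subset test: all codewords held
```

Two labels with incomparable codeword sets dominate neither each other, which is what makes the compartments genuinely separate rather than just another rung on the ladder.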

There are also descriptors, caveats, and IDO markings. Descriptors are words such as ‘Management’, ‘Budget’, and ‘Appointments’: they do not invoke any special handling requirements, so we can deal with a file marked ‘Confidential—Management’ as if it were simply marked ‘Confidential’. Caveats are warnings, such as ‘U.K. Eyes Only’, or the U.S. equivalent, ‘NOFORN’; there are also International Defense Organization (IDO) markings such as ‘NATO’. The lack of obvious differences between codewords, descriptors, caveats, and IDO markings is one of the factors that can make the system confusing.

The final generic comment about access control doctrine is that allowing upward-only flow of information also models what happens in wiretapping. In the old days, tapping someone’s telephone meant adding a physical wire at the exchange; nowadays, it’s all done in the telephone exchange software, and the effect is somewhat like making the target’s calls into conference calls with an extra participant. The usual security requirement is that the target of investigation should not know he is being wiretapped, so the third party should be silent—and its very presence must remain unknown to the target. For example, now that wiretaps are usually implemented as silent conference calls, care has to be taken to ensure that the charge for the conference call facility goes to the wiretapper, not to the target. Wiretapping requires an information flow policy in which the ‘High’ principal can see ‘Low’ data, but a ‘Low’ principal can’t tell whether ‘High’ is reading any data, and if so what.

The Bell-LaPadula Security Policy Model

The best-known example of a security policy model was proposed by David Bell and Len LaPadula in 1973, in response to U.S. Air Force concerns over the security of time-sharing mainframe systems. By the early 1970s, people had realized that the protection offered by many commercial operating systems was poor, and was not getting any better. As soon as one operating system bug was fixed, some other vulnerability would be discovered. There was the constant worry that even unskilled users would discover loopholes, and use them opportunistically; there was also a keen and growing awareness of the threat from malicious code. There was a serious scare when it was discovered that the Pentagon’s World Wide Military Command and Control System was vulnerable to Trojan Horse attacks; this had the effect of restricting its use to people with a ‘Top Secret’ clearance, which was inconvenient. Finally, academic and industrial researchers were coming up with some interesting new ideas on protection, which we’ll discuss below.

A study by James Anderson led the U.S. government to conclude that a secure system should do one or two things well; and that these protection properties should be enforced by mechanisms that were simple enough to verify and that would change only rarely. It introduced the concept of a reference monitor, a component of the operating system that would mediate access control decisions and be small enough to be subject to analysis and tests, the completeness of which could be assured. In modern parlance, such components—together with their associated operating procedures—make up the Trusted Computing Base (TCB). More formally, the TCB is defined as the set of components (hardware, software, human, etc.) whose correct functioning is sufficient to ensure that the security policy is enforced, or, more vividly, whose failure could cause a breach of the security policy. The Anderson report’s goal was to make the security policy simple enough for the TCB to be amenable to careful verification.
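The Bell-LaPadula policy itself boils down to two rules, which a toy reference monitor can make concrete: the simple security property ("no read up") and the *-property ("no write down"). The sketch below is a minimal illustration, not any real system's TCB; the level names are the generic ones used throughout this section.

```python
# Toy reference monitor enforcing the two Bell-LaPadula properties.
# Levels are illustrative; a real system would also track codewords.
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def may_read(subject_level, object_level):
    # Simple security property ("no read up"): a subject may read
    # only objects at or below its own level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def may_write(subject_level, object_level):
    # *-property ("no write down"): a subject may write only at or above
    # its own level, so Secret data can never leak into a lower file.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(may_read("Secret", "Confidential"))   # True: reading down is allowed
print(may_write("Secret", "Confidential"))  # False: writing down is forbidden
```

The *-property is the part aimed squarely at malicious code: even a Trojan running with a Secret user's privileges cannot copy what it reads into an Unclassified file.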

Wednesday, April 19, 2006

Technical Advisor position in JSI Telecom VA

$50,000 - $70,000/Year

Details: JSI Telecom is one of North America's largest developers of court-ordered or lawfully authorized electronic communications intercept solutions. In business since 1979, JSI is a strong, stable company with a wealth of experience and knowledge our customer

ISS World Spring 2006

VoiceBox Unified Collection Management

JSI Telecom will present an overview of its state-of-the-art collection management solutions for the law enforcement and intelligence communities, including cell site mapping and analysis tools and link charting capabilities.

Louise Goforth, JSI Telecom

When all else fails, police can turn to telephone taps

You can learn a lot about people by tapping their telephones, but police use that tool only after other methods have failed.

Police prefer to cozy up to drug dealers with undercover detectives and informants, buy some drugs and then swoop down with a warrant and grab whatever other evidence they can find.

Telephone taps take a lot more work, police and prosecutors said. Federal and state laws require police to justify the need to tap a phone, and convince a judge that the tap will produce evidence of a crime.

Federal phone tap law (Title 18 USC 2518) also requires police to explain what other means they’ve tried and why other methods won’t work.

“When we intercept a citizen’s phone . . . we’re under an obligation to really do our homework, really make sure that intercept is proper,” Assistant U.S. Attorney Mark Irish said.

To get permission for the phone taps in Operation Beachhead, state police filed lengthy affidavits outlining their suspicions regarding four men, and listing all known associates with whom they might talk. In addition to relying on surveillance and information from informants, police look at previous call records for the target telephone, to predict who might turn up on tape.

“When you do a federal wiretap, you are obligated to put in any possible expected interceptee (in the affidavit),” Irish said, adding later, “You’re putting as much information as you can to show why there’s probable cause to intercept this phone.

“We’re required to identify people . . . whom we may just have a hunch on,” Irish said.

To gather those records, police get authorization from a judge for a “pen register,” a device that records all incoming and outgoing phone calls from a particular phone line. Once installed, police can link up with the telephone company operating the line in question, and download the information over the Internet, state trooper Andrew Annicelli testified.

The information includes names and addresses associated with the numbers of any incoming and outgoing calls, and the time and duration of each call, he said. Police use additional software to run queries, and sort and print out information on who is communicating with the target phone and how often.
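The sorting and querying step described above is straightforward tabulation. As a rough sketch, with entirely invented call records (the actual police software and its data formats are not public):

```python
# Hypothetical pen-register log analysis: tally how often each number
# contacts the target phone. The records below are invented examples.
from collections import Counter

records = [
    {"number": "555-0101", "direction": "in",  "seconds": 42},
    {"number": "555-0101", "direction": "out", "seconds": 310},
    {"number": "555-0199", "direction": "in",  "seconds": 12},
    {"number": "555-0101", "direction": "in",  "seconds": 95},
]

calls_per_number = Counter(r["number"] for r in records)

# Report the most frequent contacts first, as an investigator's summary would.
for number, count in calls_per_number.most_common():
    print(f"{number}: {count} calls")
```

Frequent contacts surfaced this way are then checked against intelligence files to build the list of "expected interceptees."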

Police compare the list of callers and calls to their intelligence files. Anyone with a criminal record or suspected involvement can become an “expected interceptee.”

Police identified 61 “expected interceptees” in their first Operation Beachhead telephone tap, which began Feb. 22, 2001. Police tapped three phones: Michael Gingras’ cellular phone, Jim Grisson’s home phone, and Michael Dignam’s home phone.

The first warrant was good for a month, but police got it extended twice, and added a cellular phone used by Alfred Nickerson in the third warrant, issued April 27, 2001.

Once installed, the phone taps began to drive the investigation, state police Sgt. Robert Quinn testified during Nelson Santana’s trial.

“We had a surveillance team that was in place that was a reactionary team,” Quinn testified. “They were there daily, prepared to move in a certain location as directed by somebody working in the wire room.”

Police don’t like to talk about where the “wire room” is located, but state police Sgt. Michael Hambrook described how it works in testimony during Santana’s trial.

The room holds 12 computers linked in a local area network with a package of software and equipment called “Voice Box,” made by JSI Telecom, a company that specializes in wiretap gear. The system can handle numerous wiretaps at once, Hambrook said.

Tapping into a home telephone involves putting a device on the actual phone line somewhere between the house and a central switching office, Hambrook said. Cellular phone signals are intercepted by installing a computer card in the phone company’s switching equipment, he testified. Either device diverts the signal from the target phone, so that in addition to traveling its usual route through the phone company’s network, it also is routed to the wire room.

“His calls will be our calls. We program that card to send those calls, send that data, divert it to our collection equipment,” Hambrook said.

Police receive the audio signals from all incoming and outgoing calls. In addition to recording conversations, the system records the date and time of each call.

The system can record the number of any outgoing call, because it records the tones when a number is dialed. Numbers of incoming calls are recorded only if the tapped phone has caller identification, however, Hambrook said.

“So if he doesn’t get it (the number), you won’t get it. But if he does, we will,” he said.

The audio signal from any phone conversation goes directly into one of the 12 computers, where it’s stored temporarily on the hard drive, then immediately and automatically transferred to magneto-optical disks - two for each call, so there’s a backup copy - when the call is completed.

“It happens within seconds of the hang-up,” Hambrook said.

The system’s software assigns a case number for each target phone, and then numbers each intercepted call chronologically, Hambrook said.

Each call also is recorded with a digital header and trailer, with the digital recording sandwiched in between, Hambrook said. In addition to allowing police to access a specific call, the header and trailer contain algorithmic code developed by and known only to JSI Telecom, he said. Police can play back a call, but the digital recording can’t be edited or altered in any way without knowing that code, Hambrook said.
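JSI's actual header/trailer code is proprietary and its details are unknown, but the general idea it describes, that a recording cannot be altered without knowledge of a secret, is what cryptographers call a keyed integrity check. A minimal sketch using an HMAC (a standard construction, not JSI's method):

```python
# Illustrative tamper-evidence sketch: a keyed hash (HMAC) over the call
# audio serves as the "trailer." Anyone altering the bytes without the key
# cannot produce a matching trailer, so edits are detectable.
# This is NOT JSI's proprietary scheme, just the standard analogue.
import hmac
import hashlib

SECRET_KEY = b"known-only-to-the-vendor"  # illustrative placeholder

def seal(call_audio: bytes) -> bytes:
    """Compute the integrity trailer for a recorded call."""
    return hmac.new(SECRET_KEY, call_audio, hashlib.sha256).digest()

def verify(call_audio: bytes, trailer: bytes) -> bool:
    """Check a recording against its trailer in constant time."""
    return hmac.compare_digest(seal(call_audio), trailer)

audio = b"...digitized call audio..."
trailer = seal(audio)
print(verify(audio, trailer))              # True: untouched recording
print(verify(audio + b"edit", trailer))    # False: tampering detected
```

Note this only makes tampering detectable, not impossible; the physical alarms and procedural controls described in the testimony do the rest.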

“If someone had it in their mind to actually tamper with that call, and manipulate it in some way, add a word . . . delete the call, any of that, they would actually have to get into our wiring, which is alarmed,” he said.

They would then have to hack into the software, know the case number and call number in question, load audio editing software, break the JSI Telecom code, edit or delete the call, then “put it back so that its exact bit length matches what was there before, get their software out and get out of the room without us seeing that,” Hambrook testified.

“It’s not impossible, but it’s close to impossible,” he said.

Police have officers monitoring the equipment, to listen to any calls that come in. They also can copy the recorded calls onto CDs, so they can later be played back during a person’s trial.

Combined with surveillance, the recorded calls can provide damning evidence. Still, Hambrook agreed it’s not the best way to make a bust.

“If we can get the target or gather evidence enough to prosecute the target of the investigation any other way, then we’re required to do that by law,” Hambrook said. “And to be honest with you, it’s a lot easier if we do that, because a wiretap is a lot of work, it’s quite costly and it’s a big job.”

Original Source


Request to authorize the City Manager to enter into an agreement with JSI Telecom for the purchase of an annual maintenance plan and software and equipment upgrades for the Police Department. The total cost will be $40,765. This request is made by the Deputy Finance Director and the Commander of the Drug Enforcement Bureau.

The Voicebox III System was provided to Phoenix as transferred equipment from the Pima County Sheriff's Office. The equipment requires an annual maintenance agreement at a cost of $13,025. Software and equipment upgrades to improve work efficiency are requested at an additional cost of $27,740. JSI Telecom is the manufacturer and sole provider of the proprietary software and integrated equipment.

Funds are available in the Police Department's budget for this purchase. Funds up to $25,067 are available through the Hints Grant. The remaining $15,698 in funds are available through the Drug Enforcement Bureau.

The subject firm is eligible to do business with the City of Phoenix until October 26, 2006, by its compliance with the affirmative action requirements of the City Code, Chapter 18, Article IV or V. The firm is responsible for maintaining its eligibility during the life of the contract and failure to do so may result in termination of the contract.

This item is also recommended by Mr. Washington.

Original Source

DEA Awards to JSI

Description: Telephone Intercept Equipment also known as Dialed Number Recorders (DNRs). This contract will provide the Drug Enforcement Administration's (DEA) Telecommunications Intercept and Audio/Video Support Unit of the Investigative Technology Section with a variety of DNRs. The Contractor shall provide dial number recorders, supporting hardware and other accessories, as well as help line support services, training, upgrades, enhancements and options throughout the life of this contract. During the life of the contract, the contractor shall keep pace technologically with state-of-the-art telecommunications advances.

Contract Number: JSI Telecom, Gaithersburg, Maryland / DEA-97-C-0053
Comverse Government Systems, Reston, Virginia / DEA-97-C-0054
Bartlett Technologies Corporation, Hollywood, Florida / DEA-97-C-0055
RACOM, Cleveland, Ohio / DEA-97-C-0056
Recall Technologies, Inc., Palm Bay, Florida / DEA-97-C-0057
Voice Identification, Inc., Somerville, New Jersey / DEA-97-C-0058

Award Date: September 1997
Estimated Award Amount: $25 million ceiling for each contract
Contract Type: Fixed Price, Indefinite Delivery/Indefinite Quantity
Period of Performance: Base Period with Four (4) twelve-month Option Periods
Contracting Officer: Ms. Michele T. Allen, 202-307-0430
Contract Specialist: Mr. Jeffrey A. Saylor, 202-307-4366

Original Source

The Whole Earth Guide to Wiretaps

Wiretaps have proven international narcotics conspiracies and broken terrorist rings. They have convicted murderers and saved many lives. Wiretaps deliver evidence unobtainable by any other means. Yet as a society we view them with a certain queasy suspicion. We regulate wiretaps heavily, and endlessly debate the public policies that govern them. Wiretaps are the world's most intrusive investigative tool.

Contrary to the impression conveyed by Hollywood and the media, wiretaps are fairly rare. The vast majority of police officers and prosecutors go through their entire careers without ever working with one. Unless you or a close friend or relative are involved in a very serious criminal conspiracy, it is highly unlikely that you will ever be eavesdropped upon by the police.

Eavesdropping by your friends and neighbors, however, is another proposition altogether. Just walk into the neighborhood spy shop and see what's flying off the shelves. This ingenuity dates back to the early days of the telephone network. See, for instance, the 1918 case of State v. Beringer 10 Ariz. 502 (Ariz. 1918), in which an enterprising Arizonan placed a "dictograph" over the transom of a hotel room in order to spy on phone conversations. The Arizona Supreme Court found that conduct "most reprehensible," but, unfortunately, it was not in violation of an earlier state statute against tapping telegraphs. This was a classic case of a defendant getting off on a high-tech technicality. Obscure semantic distinctions are still very much a part of wiretap law today.

The basic rule defining eavesdropping is: Thou Shalt Not Intercept If Thou Art Not a Party. A "party" is a person speaking or being spoken to, or the sender or receiver of a fax or email. This means that you may not hook up a hidden tape recorder to your ex's phone, even if you yourself are paying for the telephone service. Nor are you to clone your coworker's voice mailbox. You shall not use a data sniffer to grab other people's keystrokes on the fly.

If a party consents to the interception, then it's not an "interception," it's just the recording of a message legitimately passed to a person meant to receive it. The federal rule and the rule of many states is "one party consent," though some states require consent from all parties. A legitimately recorded phone call is not eavesdropping. Police do not require a court order to listen to this evidence.

Police do have to get a court order permitting interception without consent. Only certain prosecutors specified by federal or state law can apply for these court orders, and only certain courts may grant them.

Police must satisfy the court that they have all the probable cause necessary for a search warrant, and that they have exhausted other less intrusive means of investigation without success, or that it would be too dangerous to try them. Wiretap paperwork is detailed, technical, and in a complex case can easily run fifty to a hundred pages. Communications obtained in violation of the statutes cannot be introduced in court in any proceeding--except if the defendant has engaged in some illegal eavesdropping, in which case this can become legal evidence against him. (For this reason, your secret tape recorder will be of no use in your divorce, and may cause you even greater grief than your ex did.)

A Lifesaving Wiretap in Action

In Arizona, Ira Evans was on trial for shooting up a houseful of women and children. His first trial ended in a mistrial--the star witness was to testify, but at dawn that morning one of Ira's pals fire-bombed her house. Somehow, she and her children escaped. She then went into a witness protection program.

Still in jail, with his trial beginning again and unable to get to the witness, Ira decided to kill the prosecutor. (Hint: this is seldom a truly good idea.) The cops were tipped off by the girlfriend of one of Ira's accomplices, and an emergency wiretap began. (These are cases in which the phone company and police scramble to get the equipment up and working, while the legal team has forty-eight hours to get the court order.) Technical problems often ensue between the phone company and the police, but luckily, within two hours of the "go" decision, the wiretap was up. I say "luckily" because the very first call intercepted was a "Class 1" call.

Police divide their wiretapped calls into types. Class 1 is a call directly relevant to the crime under investigation. Class 2 is a new crime, which generally means going back to court for an amended wiretap order. Class 3 would be useless junk such as a hang-up call.

Even with automated modern equipment, these judgment calls can be onerous. Is the call incoming, outbound, or three-wayed? Who are all the parties? Just staying on top of the action can burn out a wiretap listener (known as a "monitor") in a matter of minutes. Profanity, broken police pencils, and frenzied calls for a backup "relief monitor" are customary signs of a really "active" wiretap.

Ira's case was a particular monster because he was calling collect from his jail, then three-waying through an accomplice's phone that was equipped with call-waiting. Nevertheless, his very first recorded call had him plotting with an accomplice to kill his prosecutor. However, Ira Evans's right to a fair trial still had to be protected.

Therefore the team built a "Chinese wall" to keep the wiretap information from tainting Ira's trial for assault. Though the prosecutor knew nothing about results from the wiretap, he suddenly found himself with police protection. Police were able to video Ira's hit men as they entered the county building to search out the prosecutor's office.

A week's worth of Ira's wiretapping revealed not just the murder plot, but a perjury scheme for a new alibi, and (rather unexpectedly) a massive, multi-player, financial fraud under way.

Wiretaps do have a certain humor to them. Drug dealers trying for creative codes can find themselves later explaining to a jury why they wanted to buy "half a shirt" or "three tires." One genius was so frustrated at his supplier's denseness about "opera" and "symphony" that he finally yelled at him, "Man, I want some cocaine!"

Ira Evans' murder trial hit a particular high point when one of his tapes was played to the fourteen members of the jury. Over the prison phone, he'd been coaching his alibi witness to commit perjury, so he had to explain the general setup inside a courtroom. "Well, there's the judge, then there's me and my lawyer, and there's the prosecutor, and then they bring in about a hundred m* * * f * * * g fools and they pick some." The jury found him guilty.

Hands-on with the Hardware

Whether they're analog or digital, modern wiretaps are designed to do specific things. They keep track of call detail data, including the numbers called and the length of calls. They log every "call event," even if only a hang-up. They create a record every time the monitor is listening or recording--and they make it impossible to listen without recording. Police wiretapping equipment is designed to prevent mistakes or illegal use. Monitors are required by law to minimize interception of innocent conversations, so it's critical to document times when no one is listening.
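The "impossible to listen without recording" requirement is an interesting design constraint: the equipment's only audio path must also be its logging path. A toy sketch of that coupling (the class and field names are invented for illustration):

```python
# Sketch of listen-implies-record: the only way to get audio out of this
# toy monitor also appends an entry to the evidence log, so every listening
# session is documented by construction. Names are illustrative.
import datetime

class WiretapMonitor:
    def __init__(self):
        self.log = []  # every listening session lands here automatically

    def listen(self, call_id, audio):
        # Listening and recording are a single, inseparable operation.
        self.log.append((call_id, datetime.datetime.now(), audio))
        return audio

m = WiretapMonitor()
m.listen("case42-call001", b"...audio...")
print(len(m.log))  # 1: listening left a log entry without any extra step
```

Making the log a side effect of the audio path, rather than a separate button, is what lets the records prove when no one was listening.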

Some manufacturers record the audio as digital .wav files, using encryption algorithms to ensure the integrity of each stored call. To gather evidence to support a wiretap, police may employ other related gizmos. A court-ordered "pen register" or "dialed number recorder" (DNR) can keep track of numbers dialed from the subject's phone, though it cannot overhear the conversations themselves. A "trap and trace" device can collect the numbers of the phones calling in.

These devices give police information about calling patterns, not the call content, but patterns of phone behavior are useful anyway. A software package long favored by police for these logging and analysis functions is "PenLink". In the hardware arena, JSI Telecom supplies DNRs, intercept devices, and various arcane accessories with names like "multi-line dial-up slave system" and "digital trunk interface." JSI's VoiceBox III[TM] is a rather sophisticated multimedia system that allows discovery copies to be delivered to the legal defense on fresh-burned CDs. Many officers favor Marantz tape recorders for analog recording. Recall Technologies in Palm Bay, Florida, got a plum contract to supply DNRs to the DEA for the Colombian government. Its product supports up to six DNRs on a single personal computer, and the company offers interception support as well.

Another source of surveillance equipment is Jarvis International Intelligence, Inc. This company runs an outfit called "The Academy" that trains law enforcement on such vital matters as "pair identification" (it's a serious blunder to tap the wrong phone line) and "listening post operations." Operations are truly critical, for if they're not done right, all the evidence gets suppressed and all that effort and expense goes down the drain.

Original Article by Gail Thackeray

The USA Patriot and Homeland Security Acts

The relationships the FBI is developing with businesses through InfraGard and the data mining capabilities inherent in a program like "Digital Storm" have taken on a particular significance in the wake of the 9/11 terrorist attacks.

In an action that mirrors its reaction to the turbulence of the 1960s, Congress recently adopted sweeping changes to the rules governing government wiretaps. The changes were included in the "Uniting and Strengthening America By Providing Appropriate Tools Required To Intercept and Obstruct Terrorism" Act, better known as the USA Patriot Act. Among its various provisions are a number of significant changes to how surveillance is conducted in this country:
  • Government Agents and the Foreign Intelligence Surveillance Act. The Act permits government agents to use the Foreign Intelligence Surveillance Act (FISA) to intercept communications and engage in surveillance even if the primary purpose of the surveillance is a criminal investigation. The benefit to law enforcement is that the standards for obtaining authority to do surveillance under FISA are far less onerous than those applied to surveillance of U.S. citizens suspected of committing a crime.
  • Law Enforcement and Access to Websites. Although the parameters for doing so are still unclear, the Patriot Act apparently authorizes law enforcement to obtain access to a list of websites visited by an individual under investigation, as long as law enforcement agents can obtain a U.S. District Court order.

Most disturbingly for employees, the Patriot Act also gives the Federal Bureau of Investigation a virtually unfettered right to demand any records maintained by a business about an employee under investigation. Specifically, the law states:

The Director of the Federal Bureau of Investigation or a designee ... may make an application for an order requiring the production of any tangible things (including books, records, papers, documents, and other items) for an investigation to protect against international terrorism or clandestine intelligence activities....

If a federal judge or magistrate approves the government's application, an order is entered without any advance notice to the business or employee, and the business is forbidden from telling anyone that the FBI has even made a request for an employee's records. Not surprisingly, it's unclear how extensively this provision has been used over the past year, but it's clear that from both a legal and practical point of view, the FBI's ability to compile data about employees is steadily expanding.

Following the election in November 2002, the recapture of the Senate by the Republican Party helped spur passage of the Homeland Security Act on terms more acceptable to President George W. Bush. Among the more controversial provisions of the Act is the creation of a project called "Total Information Awareness" (TIA). The goal of TIA is to build a massive governmental database containing, among other things, every commercial, consumer, and financial transaction, every academic grade, and the title of every book or video rented or purchased in this country. It's unclear just yet how much information will be drawn from employers, but the potential scope of TIA is not encouraging.

InfraGard and the Coming "Digital Storm"

On February 26, 1998, using a $64 million appropriation from Congress, Attorney General Janet Reno and FBI Director Louis Freeh created a new multiagency group called the National Infrastructure Protection Center (NIPC, pronounced "nip-see"). According to NIPC's first director, Michael Vatis, the group was based at the FBI because of the need for the agency's investigative resources when an unauthorized intrusion is detected.

Later that spring, on May 22, 1998, President Clinton signed Presidential Decision Directive 63, which charged NIPC with the responsibility of assessing the potential for cyberthreats, conducting investigations, issuing warnings, and evaluating infrastructure vulnerabilities. As designed by Reno and Freeh, NIPC will employ more than 500 people around the country; Vatis told Wired magazine in the fall of 1998 that "[a]t least half of our staff will come from the Secret Service, National Security Agency, CIA, NASA, Department of Defense, state and local law enforcement, Department of Treasury, Department of Energy, and the Department of Transportation."

A central focus of NIPC has been to expand and build upon a program called InfraGard, which was developed by the Cleveland FBI office in the summer of 1996. On its website, the FBI describes InfraGard as follows:

InfraGard is a cooperative effort to exchange information between the business community, academic institutions, the FBI, and other government agencies to ensure the protection of the information infrastructure through the referral and dissemination of information regarding illegal intrusions, disruptions, and exploited vulnerabilities of information systems.

By the beginning of 2001, all fifty-six FBI field offices around the country were running InfraGard chapters, and more than 518 private businesses had signed up. In order to persuade companies to participate, NIPC provides them with a secure website on which information is posted and secure e-mail for exchanging information about intrusions and threats.

The FBI is steadily increasing its capability for gathering, storing, and cross-matching the detailed information it receives from the business community. As an extension of its work with NIPC, the FBI asked Congress in 2000 to appropriate $75 million to upgrade the Bureau's information technology. Under a program dubbed "Digital Storm," the FBI is planning to replace all of its analog wiretap equipment with digital intercepts, running off of specially modified PCs. As the FBI makes the transition to digital technology, it will gain the ability to do keyword searches on thousands of pages of wiretap transcripts; currently, agents must wade through lengthy audio tapes or hard-copy transcripts. The upgrade from analog to digital technology will also improve the FBI's data mining capabilities for the information contained in its myriad databases.

To Disclose or Not to Disclose


National Security Agency. An agency of the Federal Government. NSA is the Federal agency responsible for the design and use of nonmilitary encryption technology, developing sophisticated codes to scramble data, voice or video information. In short, it is charged with signals intelligence and is widely assumed to monitor all communications traffic (phone, fax, data, video, etc.) between the United States and foreign countries. It is barred from intercepting domestic communications.

NSA grabbed the headlines in 1993 and 1994 when it mounted its most visible attempt to outgun cybervillains with something called the Clipper Chip. The idea was that the Clipper Chip (a microprocessor) installed in every phone, computer, and personal digital assistant in America would carry a device identification number and electronic “keys” — a family key and a unit key unique to each Clipper Chip. The device key is split into two numbers that, when combined into what’s called a Law Enforcement Access Field number, can unscramble the encrypted messages. The device keys and the corresponding device numbers, according to NSA proposals, would be kept by the U.S. government through key escrow agents. Under the proposed plan, the attorney general would deposit the two device keys in huge, separate electronic database vaults. One key would be held by the National Institute of Standards and Technology (NIST) and the other by the Automated Systems Division of the U.S. Treasury. Access to these keys would be limited to government officials with legal authorization to conduct a digital wiretap. When a law enforcement agency wants to tap into information encrypted by the Clipper Chip, it must obtain a court order and then apply to each of the escrow agents. The agents electronically send their keys to an electronic black box operated by the law enforcement agency. When these keys are electronically inserted, encrypted conversations stream into the black box and come out as standard voice transmissions or, in the case of electronic mail, as ASCII characters. At least that’s the theory. American industry resisted the Clipper Chip and NSA backed down, only to start pushing for something called Fortezza. See NSA Line Eater.

Since I wrote the above, NSA has come part way out of its secret shell. It now has its own Web site, on which it describes itself thus: The National Security Agency (NSA) was established by Presidential directive in 1952 as a separately organized agency within the Department of Defense under the direction, authority, and control of the Secretary of Defense, who acts as Executive Agent of the U.S. government for the production of communications intelligence (COMINT) information. The Central Security Service (CSS), which is part of NSA, was established by Presidential memorandum in 1972 in order to provide a more unified cryptologic organization within the Department of Defense. The Director, NSA, serves as chief of the CSS and exercises control over the signals intelligence activities of the military services.

The resources of NSA/CSS are organized for the accomplishment of two national missions. The information systems security or INFOSEC mission provides leadership, products, and services to protect classified and unclassified national security systems against exploitation through interception, unauthorized access, or related technical intelligence threats. This mission also supports the Director, NSA, in fulfilling his responsibilities as Executive Agent for interagency operations security training. The foreign signals intelligence or SIGINT mission allows for an effective, unified organization and control of all the foreign signals collection and processing activities of the United States.

NSA is authorized to produce SIGINT in accordance with objectives, requirements and priorities established by the Director of Central Intelligence with the advice of the National Foreign Intelligence Board. Executive Order 12333 of 4 December 1981 describes in more detail the responsibilities of the National Security Agency.

Indirect Tapping

A current in a conductor gives rise to a magnetic field around the conductor. When the current varies, the magnetic field changes. Conversely, if a conductor is immersed in a magnetic field, changes in the magnetic field will induce currents in the conductor. A coil of wire attached to a telephone or clamped to a telephone line can pick up conversations on the telephone. This type of wiretap, where there is no physical connection between the tap and the target line, is called Indirect Tapping.
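The induction relationship can be put in rough numbers. The sketch below applies Faraday's law (EMF = -N dΦ/dt) to show why an indirect tap's pickup coil uses many turns of wire; the flux and timing figures are invented purely for illustration and are not measurements of any real intercept device.

```python
def induced_emf(turns, d_flux_wb, d_t_s):
    """Faraday's law: voltage (in volts) induced in a coil of `turns`
    turns when the magnetic flux through it changes by d_flux_wb
    webers over d_t_s seconds."""
    return -turns * d_flux_wb / d_t_s

# A speech-band current in the line produces a tiny, rapidly varying
# flux around the conductor. Hypothetical numbers: a flux change of
# 1e-7 Wb over 1 ms (roughly one cycle of a 500 Hz voice tone).
emf_few_turns = induced_emf(turns=10, d_flux_wb=1e-7, d_t_s=1e-3)
emf_many_turns = induced_emf(turns=5000, d_flux_wb=1e-7, d_t_s=1e-3)

print(abs(emf_few_turns))   # 0.0001 V per 10 turns is barely usable...
print(abs(emf_many_turns))  # ...but thousands of turns multiply it
```

The point of the sketch is only the linear dependence on turns: the same changing field yields a proportionally larger signal in a coil with more windings, which is why a clamp-on pickup coil needs no electrical contact with the line.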


Communications Assistance for Law Enforcement Act. Passed in 1994, CALEA is a U.S. law granting law enforcement agencies the ability to wiretap newer digital networks. The act also requires both wireline and wireless carriers to build wiretap capability into their equipment.

Alternatives to Wiretaps

Electronic eavesdropping methods allow law enforcement officers to legally compromise privacy. Privacy activists argue that law enforcement already has many technologies available as alternatives to wiretaps. Alternatives not defeated by the use of encryption include:
  • Improved call-tracing methods
  • Surveillance with infrared scanners
  • Aerial surveillance
  • Bugging
  • Filtering that picks certain voices or keywords out of the babble of telecommunications traffic, formerly precluded by the sheer volume of calls
  • Supersensitive satellite photography that lets the police peer into windows or identify a license plate from 20 miles up in the sky
  • Vast electronic databases [many combined]
  • Plaintext readers such as Tempest, which read text appearing on computer screens through closed doors and walls as we type
  • Laser light beams that allow conversations to be deduced from vibrations of the windowpane
  • Credit card transactions, e-mail, Internet transactions, and click-stream data are all easy to intercept or subject to other electronic surveillance methods.

Wiretap Act, 18 U.S.C. 2511

Section 2511 of the Wiretap Act prohibits the interception and disclosure of wire, oral, or electronic communications. The elements of a 2511 violation include: the intentional interception of any wire, oral, or electronic communication; the use of any electronic, mechanical, or other device to intercept any oral communication when the device is affixed to, or otherwise transmits a signal through, a wire, cable, or other like connection used in wire communication, or when the device transmits communications by radio or interferes with the transmission of wire communication; and the intentional disclosure of the contents of any wire, oral, or electronic communication with knowledge that the information was obtained through the interception of a wire, oral, or electronic communication.

It is further unlawful to use a pen register or a trap and trace device without authority. It is also unlawful to intercept wire or electronic communication that is scrambled, encrypted, or transmitted using modulation techniques whose essential parameters have been withheld from the public with the intention of preserving the privacy of the communication, such as:
  • Radio portion of a cellular telephone communication, a cordless telephone communication that is transmitted between the cordless telephone handset and the base unit, a public land mobile radio service communication, or a paging service communication
  • Interception of a satellite transmission (encrypted or scrambled)
Punishment specified under this act includes a minimum fine of $500 for each violation and jail terms that depend on the circumstances and damages.

Searching and Seizing Computers and Obtaining Electronic Evidence

Recognizing and Meeting Title III Concerns in Computer Investigations

Robert Strang, U.S.A. Bulletin, March 2001

The dramatic increase in crimes involving the Internet, and computer crimes more generally, is well documented. The "2000 CSI/FBI Computer Crime and Security Survey" documented that 90 percent of the 643 respondents (primarily large U.S. corporations and government agencies) detected computer security breaches within the last twelve months, totaling hundreds of millions of dollars in losses. In light of the increased criminal opportunities created by the ever-growing reliance on, and interconnectedness between, networked computers, there can be no doubt that experienced and sophisticated computer criminals pose a substantial challenge to law enforcement.

There has also been a corresponding increase in the difficulty of catching computer criminals. There are a number of reasons why this is so. The anonymity provided by computer communications has long been recognized as one of the major attractions for would-be computer criminals. This difficulty has been heightened by the use and availability of so-called "anonymizers," services that repackage electronic mail and thereby diminish the ability to trace it. In addition, many victims and Internet service providers (ISPs) fail to record, or preserve for a sufficient length of time, historical logs and other records that might otherwise lead to the identification of subjects engaged in wrongdoing. Furthermore, the practice of jumping from compromised network to compromised network, including networks with servers located outside of the United States, can also make tracing the communications back to the initial subject extremely difficult. This is especially true where subjects have made efforts to cover their tracks or where proof of criminal activity, or even of their fleeting presence, is lost before it can be secured. Finally, victims may be unaware of criminal activity on their network or, if aware, slow or unwilling to report it for competitive reasons. For these and other reasons, there are many computer crimes where it will be impossible for law enforcement to identify the perpetrators involved. Therefore, exclusive reliance on historical investigations will allow criminal activity carried out by more experienced and skillful criminals to go undetected and/or unpunished.

Issues Raised by Proactive Investigations

As a result of these limitations, law enforcement is increasingly turning to proactive investigations in which undercover agents seek out the individuals who are already engaging in computer crimes, attempting to record computer criminals in real time while they are involved in the criminal act. The proactive approach bypasses some of the investigatory hurdles of anonymity, lack of records, and under-reporting inherent in computer cases. It also has the added benefit of potentially stopping the criminal before the damage is done. Real-time monitoring of criminal activity is even advantageous in some historical investigations where a subject returns to, or passes through, the same victim's network. As criminals become increasingly adept at avoiding leaving a historical trail, such investigations are the next logical step for law enforcement (and one that is increasingly being taken).

Such undercover operations and recording are also feasible. The very expectation of anonymity that benefits criminals also helps law enforcement undercover agents enter this world without being scrutinized, as long as they can talk the talk. Agents can even use other undercover identities to vouch for themselves. From a technical perspective, so-called "sniffer" computer programs that are capable of recording all keystroke activity on a particular computer network are a well-known and widely available tool for system administrators, hackers, and law enforcement alike.
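To make the "sniffer" idea concrete, here is a minimal, purely illustrative sketch: a relay sitting on a machine the traffic already passes through, copying bytes into a log before forwarding them unchanged. It is a localhost toy built on ordinary TCP sockets, not a description of any real law-enforcement or hacker tool; all names and the captured credential string are invented.

```python
import socket
import threading

def relay_once(srv, dest_addr, log):
    """Accept one connection, record the first chunk of client data
    (the "sniffing" step), then forward it unchanged to dest_addr."""
    client, _ = srv.accept()
    upstream = socket.create_connection(dest_addr)
    data = client.recv(4096)
    log.append(data)        # the intercepted copy kept for the log
    upstream.sendall(data)  # the traffic continues, apparently untouched
    client.close()
    upstream.close()

# A throwaway local listener stands in for the real destination host.
sink = socket.socket()
sink.bind(("127.0.0.1", 0))
sink.listen(1)

# The monitored hop: the client connects here, not to the destination.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)

captured = []
t = threading.Thread(target=relay_once,
                     args=(srv, sink.getsockname(), captured))
t.start()

c = socket.create_connection(srv.getsockname())
c.sendall(b"USER alice PASS hunter2")  # hypothetical login traffic
t.join()
c.close()
sink.close()
srv.close()
print(captured[0])  # the relay saw everything the client sent
```

The client's bytes arrive at the destination exactly as sent, which is why monitoring of this kind is invisible to the parties being recorded and why its legality turns entirely on the Title III analysis discussed here.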

These types of investigatory techniques often raise legal issues. One of the major issues raised by real-time monitoring is compliance with federal wiretapping statutes. This article focuses on the ability to legally and contemporaneously record and identify subjects, and to develop admissible evidence, which is central to a successful investigation. Agents and other investigators, some with only limited experience in this area, may turn to prosecutors with questions regarding what they can and cannot do in their efforts to use real-time monitoring of criminals during the course of undercover operations. It is critical for prosecutors to be able to identify potential legal issues relating to such recordings in advance, before problems arise.

Because the current legal road map is largely without judicial markers, it is important to address some of the potential issues raised by the application of the privacy laws to real-time monitoring, as well as some of the statutory exceptions that may permit monitoring to take place absent a court order.

Application of Title III to "Electronic Communications"

In 1986, Congress passed the Electronic Communications Privacy Act ("ECPA"), which, among other things, extended the prohibitions contained in Title III of the Omnibus Crime Control and Safe Streets Act of 1968 (the "Wiretap Act"), 18 U.S.C. §§ 2510-2521, to electronic communications that are intercepted contemporaneously with their transmission — that is, electronic communications that are in transit between machines and which contain no aural (human voice) component. Thus, communications involving computers, faxes, and pagers (other than "tone-only" pagers) all enjoy the broad protections provided by Title III unless one or more of the statutory exceptions to Title III applies. In the computer context, both the government and third parties are prohibited from installing "sniffer" computer software, such as the FBI's Carnivore program, to record the keystrokes and computer traffic of a specific target unless one of the exceptions is present.

Where the government is seeking to intercept and monitor all electronic communications originating from a target's home or through the e-mail account at the target's ISP, the application of Title III differs little from its historical application to telephone wiretaps. The issues agents and prosecutors are likely to encounter are typically technical, not legal. This is particularly true when law enforcement is dealing with ISPs who may have little or no experience in providing Title III assistance to law enforcement, have technical or manpower difficulties in providing access to the subject's accounts, or show an overall reluctance in working with law enforcement.

Sometimes, however, the potential effect of Title III's restrictions on computer law enforcement can be unexpected. For example, if a hacker breaks into a victim's computer, engages in criminal activity, and uses it to store credit card numbers, common sense would suggest the subject hacker enjoys no reasonable expectation of privacy. Perversely, however, the subject hacker's communications may enjoy statutory protection under Title III, and thus any interception of that illegal activity by a private party (including the victim) or law enforcement must fall within one of the statutory exceptions in order to monitor without a court order. In the above example, the victim's consent is likely to be sufficient to fall within one of Title III's statutory exceptions.

This example, however, becomes more difficult if the subject hacker simply uses the victim's computer as a jump point from which to illegally hop to new downstream victims or to communicate with the hacker's confederates, as is frequently the case. Does a victim have a right to monitor communications that are being made by a subject hacker who is trespassing on their computer, and is no longer seeking to damage it, but rather is passing through on his or her way to commit more mischief? Does the government enjoy the same rights to monitor that communication as the victim? How, if at all, does the analysis change when the government is the primary victim of the hacking activity?

The analysis of these scenarios is currently dependent on how courts interpret the breadth of existing statutory exceptions to Title III that were written to address the interception of simple, two-way telephone conversations. Thus, under current law, a hacker, a trespasser on another party's computer network, an intruder who enjoys no expectation of privacy, may nevertheless receive certain statutory protections under Title III. Prosecutors must therefore consider whether the statutory exceptions to Title III permit any proposed monitoring. The following are three statutory exceptions that appear to offer potential alternatives to the administrative and judicial burdens involved in seeking court-ordered monitoring under Title III.

Consent of a Party "Acting Under Color of Law"

The most commonly used exception to Title III's requirements permits "a person acting under color of law" to intercept an "electronic communication" where "such person is a party to the communication, or one of the parties to the communication has given prior consent to such interception." 18 U.S.C. § 2511(2)(c).

While there are not many judicial decisions in this area, two circuits appear to recognize that the owner of a computer may be considered a "party to the communication" and thus can consent to the government monitoring electronic communications between that computer and a hacker. See United States v. Mullins, 992 F.2d 1472, 1478 (9th Cir. 1993); United States v. Seidlitz, 589 F.2d 152, 158 (4th Cir. 1978). Thus, this exception appears to permit a victim to monitor, and to authorize the government to monitor, hacking activity directed at his or her computer.

By contrast, if the communication merely passes through a victim's computer, a court may consider it a strain to conclude that the victim computer is a "party" to the communication. Technically, the victim's computer is receiving electronic communications and passing them on to downstream victims and/or confederates of the subject hacker. The literal possibility of monitoring this downstream traffic is present, as all the data streams through the victim's computer, but is the victim a "party to the communication" if the communications are simply passing through its system? A court may conclude that the owner is not a "party" capable of giving consent to keystroke monitoring, given its pass-through role.

This is more than a metaphysical concern. Hackers regularly seek to pass through the computers of victims they have previously hacked to: (1) cover their trail when they arrive at their next victim or victims; (2) continue to make use of favorable features of a compromised network such as storage space, bandwidth, and processing speed; (3) return to hacking tools they have left there for safe-keeping; or (4) simply as a pattern of passing through old conquests to make sure their previous exploits have not been detected. This situation can arise even when a government computer is the initial victim. From there, the subject may hop (typically Telnet) to the next network without taking the trouble of backing out of the hacked system. It is possible that the downstream network may not even be a true victim, but rather may belong to a system friendly to the subject hacker. In any event, the statutory exception requires that this new victim give "prior consent" to the monitoring, which will be almost an impossibility in the short term where the victim or victims typically cannot be known in advance.

Consent of a Party "Not Acting Under Color of Law"

Title III also permits "a person not acting under color of law" to intercept an "electronic communication" where "such person is a party to the communication, or one of the parties to the communication has given prior consent to such interception." 18 U.S.C. § 2511(2)(d).

In addition to permitting a victim to monitor communications to which he or she is a party before law enforcement gets involved, this exception provides a very powerful tool to law enforcement: obtaining the implied consent of the subject hacker himself or herself through computer "banners."

Computer networks frequently make use of computer banners that appear whenever a person logs onto the network. Each of us, for example, passes through such a banner each day when we log onto the Department of Justice's computer network. A banner is nothing more than a program that is installed to appear whenever a user attempts to enter a network from a designated point of entry known as a "port." Banners vary substantially in wording, but they usually inform the user that: (1) the user is on a private network; and (2) by proceeding, the user is consenting to all forms of monitoring. Government networks already employ such broad-based banners, and we encourage private industry to follow suit. Businesses are often amenable to doing so, although often for non-law enforcement purposes, such as the monitoring of their employees' use of the Internet.
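As a concrete sketch, a banner is simply text the service emits before anything else happens on the connection. The toy below wires an illustrative consent banner into a minimal TCP service on localhost; the banner wording is invented for demonstration and is not legal advice or any agency's actual notice.

```python
import socket
import threading

BANNER = (b"WARNING: This is a private computer network. "
          b"Unauthorized access is prohibited. By proceeding, "
          b"you consent to monitoring of all activity.\r\n")

def serve_one(srv):
    """Accept a single connection and send the banner before any
    other interaction, so every entrant sees the notice first."""
    conn, _ = srv.accept()
    conn.sendall(BANNER)
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
threading.Thread(target=serve_one, args=(srv,)).start()

# Any client entering through this port receives the notice up front.
c = socket.create_connection(srv.getsockname())
greeting = c.recv(1024)
c.close()
srv.close()
print(greeting.decode())
```

The design point is placement, not content: because the banner program sits at the port itself, everyone who enters through that port sees it, which is what supports the implied-consent argument described above.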

Thus, the subject hacker gives implied consent to monitoring whenever he or she passes through a properly worded banner. A properly worded banner should also result in implied consent by the subject hacker to the monitoring of all downstream activities, thus alleviating Title III concerns in much the same way as telephone monitoring of inmates, based on implied consent, has been upheld by the courts.

Due to their pervasiveness, the presence of banners is unlikely to deter or arouse suspicion in a subject who has already decided to enter a network illegally. In the case where a private network failed to have a sufficiently broad banner to permit monitoring, a later attempt to add a banner between visits may cause suspicion on the part of the hacker. Even in this situation, however, the very nature of the hacking experience frequently involves the constant cat and mouse game between network system administrators, seeking to remove hackers from their systems by terminating a compromised account or by "patching" the vulnerability that permitted the hackers to illegally enter the network, and the hackers attempting to return to the system and overcome and disable its security features. Thus, the addition of a new banner may not concern a dedicated hacker. The subject hacker may not be aware that Title III may prevent law enforcement from monitoring all of the intruder's activities while he or she is connected to the compromised computer network.

Finally, there are technical limitations to the use of banners. Computer systems are designed to have hundreds of ports for different types of uses such as electronic mail, remote log-in, or Telnet. Most of these ports are not in use and remain closed, and can only be opened by a system administrator, or by a hacker who has illegally obtained the same privileges as a system administrator. Due to the technical nature of these ports, which goes beyond the scope of this article, it is not possible to install a banner or other message on a certain percentage of them. It is possible for a determined hacker to gain system administrator privileges (known as "superuser" or "root" status) on a network and open one or more of these ports, perhaps to serve as a future "back door" means of entry. Once the subject has been given notice and has given implied consent to monitoring by making use of the network, however, that consent should remain valid for future use, whether entry is made through a bannered or a non-bannered port. The only question this possibility raises is whether an affiliated or unaffiliated hacker might use one of these non-bannered ports for entry and never pass through a banner.

Protection of the Rights and Property of the Provider

Title III also grants providers of a communication service, including an electronic communication service, the right to intercept communications as a "necessary incident to the rendition of his service" or to protect "the rights or property of the provider of that service." 18 U.S.C. § 2511(2)(a)(i).

This exception permits a private party to monitor activities on its system to prevent misuse of the system through damage, fraud, or theft of services. Since computer hacking often involves damage or disabling of a network's computer security system, as well as theft of the network's service, this exception permits a system administrator to monitor the activities of a hacker while on the network.

This exception to Title III has some significant limitations. One important limitation is that the monitoring must be reasonably connected to the protection of the provider's service, and not as a pretext to engage in unrelated monitoring. While no court has explored what this limitation means in the computer context, by way of analogy, one court has held that a telephone company may not monitor all the conversations of a user of an illegal clone phone unrelated to the protection of its service. See McClelland v. McGrath, 31 F. Supp.2d 616 (N.D. Ill. 1998).

Furthermore, the right to monitor is justified by the right to protect one's own system from harm. An ISP, for example, may not be able to monitor the activities of one of its customers under this exception for allegedly engaging in hacking activities on other networks. This limitation also makes it harder for a network administrator to justify the monitoring of hacking activities of a subject who has jumped to a new downstream victim. This potential limitation is unfortunate as it becomes more applicable precisely when the consent of a "party to the communication" is also at its weakest.

Another important limitation of this exception is that it does not permit a private provider of the communication service to authorize the government to conduct the monitoring; the monitoring must be done by the provider itself. Thus, where a provider lacks the technical or financial resources, or the desire, to engage in monitoring itself, it may be difficult for the government to step in to assist. Similarly, in situations where the government becomes aware that an ISP or network system administrator is monitoring illegal activity in order to protect its "rights and property," the government should be careful not to direct or participate in the monitoring, or cause it to be continued, because the provider may be deemed an agent of the government, and the exception may not apply. Compare United States v. Pervaz, 118 F.3d 1 (1st Cir. 1997), with McClelland, supra.

Even with these limitations, the provider exception can be very useful, particularly when a system administrator aggressively chooses to investigate hacking activity, or when the victim computer network is owned by the government. The technical gap in the use of implied consent described above, the inability to place consent banners on certain ports, can be filled by the use of the provider exception to monitor computer intrusions coming through these ports.

While Title III concerns are only one of the potential issues raised by proactive investigations in the computer context (others may include entrapment or even third-party liability), they are certainly among the most important. When all else fails, the prosecutor can always seek a Title III interception order. While this requires both departmental and judicial approval, a few aspects of obtaining such a "datatap" order may make it less of a burden than obtaining a traditional telephone wiretap order. First, with respect to the interception of electronic communications, law enforcement is not limited to a list of predicate offenses, but may seek an order for any federal felony (note that some forms of hacking may constitute only a misdemeanor). See 18 U.S.C. § 2516(3). Second, with respect to recording on or through a victim computer, the hacking activities themselves typically constitute a federal felony, so meeting the probable cause standard for seeking the authorization will be simple. See 18 U.S.C. § 2518(3)(a).

Third, the method of recording the results of the datatap is not difficult; the information can be obtained using specialized software or commercially available sniffer programs. Finally, minimization presents far less of a problem than it does in the execution of a traditional wiretap. See 18 U.S.C. § 2518(5). The burdens encountered and time lost in seeking Title III authorization make the proper use of the exceptions discussed in this article extremely useful tools for investigating criminal activity. With the aid of proper monitoring, as well as the many tools for obtaining the historical activities of subject hackers, law enforcement can overcome the potential anonymity provided by a computer, and identify and prosecute those criminals who abuse it to violate the law.

Enforcing the Criminal Wiretap Statute

The Computer Crime and Intellectual Property Section helps to protect the privacy of Americans by enforcing the criminal wiretap statute, 18 U.S.C. § 2511. One well-publicized interception involved a conference call in which the Speaker of the House, Newt Gingrich, participated. The couple that intercepted the call pleaded guilty on April 25, 1997, as described in the Department's press release.

On September 8, 1998, a Sheriff in North Carolina pleaded guilty to wiretapping and recording a high school teacher's telephone calls, which the Sheriff intended to use to force the teacher out of his job.

The FBI Wiretap Law

A hotly contested FBI proposal called the Wiretap Bill, or the Digital Telephony (DT) bill, finally passed Congress unanimously on October 7, 1994. The bill mandates that all communications carriers must provide "wiretap-ready" equipment. The purpose of this is to facilitate the FBI's implementation of any wiretaps that are approved by the courts. The bill was strongly opposed by Computer Professionals for Social Responsibility (CPSR), the Voters Telecommunications Watch (VTW), the ACLU, and the Electronic Privacy Information Center (EPIC), among others, and much support for this opposition was marshalled in the form of letters and e-mail messages to congressional representatives.

CPSR sent out a list of "100 Reasons to Oppose the FBI Wiretap Bill"; for example, Reason 29 was that the bill contains inadequate privacy protection for private e-mail records. The estimated cost of enacting the law is (according to a CPSR report) $500 million, a cost that will be borne by "government, industry, and consumers." (This information came from their website.) This is just another instance of a government action that will infringe a right of the public (in this case, privacy) and require the public to pay for it.

Skype and JSI

Skype recently released a security white paper by Tom Berson of Anagram Laboratories that outlines exactly how Skype uses encryption.

Skype uses 256-bit AES encryption for its data stream. The encryption also makes it difficult to log the use of Skype, or, more precisely, what is being communicated over Skype and who may be talking. The encryption covers the voice call, any file transfers, and instant messaging.
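The mechanics of symmetric encryption (the same key both encrypts and decrypts) can be illustrated without Skype's internals. The toy below derives a keystream from SHA-256 in counter mode and XORs it with the data. It is emphatically not AES and not Skype's actual design; it is a minimal stand-in showing why an eavesdropper without the 256-bit session key captures only noise.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 digests of key+counter, concatenated.
    A stand-in for a real cipher's keystream generator, NOT AES."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """One function for both directions: XOR is its own inverse."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# A hypothetical 256-bit session key, as both endpoints would share.
key = hashlib.sha256(b"negotiated session secret").digest()
frame = b"voice frame 0001"

ciphertext = xor_cipher(key, frame)
print(ciphertext != frame)          # True: gibberish on the wire
print(xor_cipher(key, ciphertext))  # b'voice frame 0001'
```

The same property holds for voice packets, file-transfer chunks, and instant messages alike: anyone tapping the wire sees only the ciphertext, while either endpoint holding the session key recovers the original bytes exactly.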

A potential issue for Skype and virtually any other VoIP messaging application is a set of new rules under the Communications Assistance for Law Enforcement Act (CALEA), which goes into effect in 2007. These rules make it easier for the government to obtain wiretap access to the media stream. The rules say that any VoIP protocol or company must be wiretap-ready, and the list includes SkypeOut, Vonage, Packet 8, and many others. There are crossover worries for other groups offering Internet access, and the Federal Communications Commission (FCC) has promised a set of regulations clarifying the first set of rules.

We highly recommend that anyone considering Skype for business use read the upcoming rules and regulations to better understand any impact these regulations might have on their business.

Sunday, April 16, 2006

The Most Unprofessional Manager: Kevin Lawrence

Kevin Lawrence is the most unprofessional manager I have ever met at JSI Telecom.
I really want to know how this guy got to be a manager.

From my very first day, I started thinking of leaving. I was given an assignment and realized very quickly that I was not going to receive any mentoring or support.

Of course, many managers are so busy or preoccupied that they wouldn't even notice if their employees walked around wearing sandwich boards saying, "Trying to Change Things!" or "Staying and Becoming Less Engaged Every Day!" — or whatever step in the disengagement process they happen to be on at the time.

Not that it's only the manager's responsibility to take the initiative in this process—employees also need to understand they have a singular responsibility to find ways of addressing their concerns and re-engaging themselves in the workplace.

But many managers are just too slow to observe the telltale signs of employee disengagement until it's too late to do anything about it.

The obvious early warning signs of disengagement are absenteeism, tardiness, or behavior that indicates withdrawal or increased negativity. It is also useful to know that these early signs of disengagement typically start showing up after a shocking or jarring event takes place that causes the employee to question his or her commitment.

My manager, Kevin Lawrence, was not a good mentor or coach. He was just coasting toward the CEO position, but he was moody and unprofessional. And then one day he yelled at me. I went to his manager about it, but he just excused the behavior, saying "that's just the way he is." That was the last straw for me.

I felt Kevin Lawrence didn't seek my input or recognize my contributions. Then, the work started becoming more administrative than technical. I felt like I was just shuffling papers and not designing anything. That's when I started looking elsewhere, and a coworker referred me to the company I now work for.

This guy needs to go.

Some quit and leave ... others quit and stay.