Monitoring Space Debris - Better Information Needed
An article in the August issue of Scientific American examines the challenge of tracking space debris. The Space Age is just 40 years old, yet since 1958 rockets have lifted more than 20,000 metric tons of material into space. Today 4,500 tons remain in the form of nearly 10,000 objects, only 5% of which are functioning spacecraft. These are the objects that military radars and telescopes can track. Ten million or more additional objects range in size from one millimeter to 10 centimeters - large enough to seriously damage or destroy a functioning satellite or manned space station - and cannot be detected by present technology. The Earth and spacecraft are also bombarded by micrometeorites and particles of comet debris, which are likewise undetectable.
The space debris also includes billions of other micron- to sub-millimeter-size particles, such as paint chips and solid-rocket exhaust. Unfortunately, determining the location and amount of centimeter-size space debris is difficult, and the U.S. Defense Department is developing new space surveillance methods to detect debris in the one to 10 centimeter range. The risk to functioning satellites and manned space vehicles is considerable - space debris typically impacts a satellite at 15 kilometers per second, about 20 times the speed of a high-velocity rifle bullet fired on Earth. The "terminal effect" on a spacecraft may be both kinetic and electromagnetic - at very high velocities, impact energy is transformed into plasma energy.
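To put those speeds in perspective, here is a minimal back-of-the-envelope sketch (ours, not the article's; the 1-gram fragment and the 10-gram, 800-meter-per-second rifle bullet are illustrative assumptions) showing why even tiny debris is so destructive - kinetic energy grows with the square of velocity:

    # Illustrative sketch: the masses and bullet velocity below are
    # assumptions chosen for comparison, not figures from the article.
    def kinetic_energy_joules(mass_kg, velocity_m_s):
        # Kinetic energy E = 1/2 * m * v**2
        return 0.5 * mass_kg * velocity_m_s ** 2

    debris = kinetic_energy_joules(0.001, 15_000)   # 1 g fragment at 15 km/s
    bullet = kinetic_energy_joules(0.010, 800)      # 10 g bullet at ~800 m/s
    print(f"1 g debris fragment: {debris:,.0f} J")  # 112,500 J
    print(f"10 g rifle bullet:   {bullet:,.0f} J")  # 3,200 J
    # Because energy scales as v**2, the far lighter fragment delivers
    # roughly 35 times the energy of the bullet.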
The amount of space debris would be much greater were it not for natural orbital decay of objects in low orbit (up to 1,500 km) where there is still enough air resistance to force satellites to eventually enter the Earth's atmosphere and burn up. The article was written by NASA scientist Nicholas Johnson and appears on page 62.
Y2K Problem - Major Drain on U.S. Economy?
According to a front-page report in the August 2nd issue of The Washington Post, fixing the Y2K problem will pose a much greater challenge to the U.S. economy than previously suspected. According to the Post, almost every business and government agency around the globe has a team dedicated to the crucial task of repairing computer systems so they will work in the year 2000. With about 500 days before the new century begins, many of the world's large corporations have yanked hundreds of workers off their regular jobs, hired thousands of technical consultants, and earmarked many millions of dollars for new electronic equipment.
Three types of computer systems are causing the most concern. Programs written for mainframe computers are proving difficult to inspect and repair: many were written decades ago, lack data dictionaries, and are maintenance nightmares. Another major problem is "embedded chips" - microprocessors and logic chips found in industrial equipment. Although only about two percent of the equipment these devices control is not Y2K "compliant," a single industrial plant may contain thousands of different embedded systems. Software written for small business systems is the third major problem. Most small business owners aren't familiar with software, other than as users. Their computer systems may be a decade old, and fixing non-compliant software may cost more than a new computer and software combined.
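The underlying defect is simple: programs stored years as two digits, so date arithmetic breaks once "00" means 2000. A minimal sketch (in Python for brevity; the pivot value in the common "windowing" fix is our assumption, not a detail from the Post article):

    # Sketch of the classic Y2K defect: arithmetic on two-digit years
    # goes wrong once the century rolls over.
    def age_buggy(birth_yy, current_yy):
        # Pre-2000 style: assumes both years share the century 19xx.
        return current_yy - birth_yy        # 0 - 65 = -65 in the year 2000

    def age_windowed(birth_yy, current_yy, pivot=50):
        # A common remediation ("windowing"): YY below the pivot is read
        # as 20YY, otherwise as 19YY. The pivot of 50 is an assumption.
        expand = lambda yy: (2000 if yy < pivot else 1900) + yy
        return expand(current_yy) - expand(birth_yy)

    print(age_buggy(65, 0))     # -65: the bug
    print(age_windowed(65, 0))  # 35: correct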
The Post article also explores the effect of non-compliant software on major sectors of the U.S. economy - and concludes that whether a company or organization makes it through January 2000 unscathed by the Y2K bug will depend on the smooth functioning of computer systems at myriad other organizations, such as those that provide financial services, electric power, phone service, and package delivery. On a positive note, consumers should not experience problems with appliances and motor vehicles. And users of personal computers that have Windows 95/98 or Apple OS operating systems installed will experience few, or no, problems. The article is one of three in a series written by Post staff writer Rajiv Chandrasekaran. Other articles in the series appeared in the Post on August 3rd and August 4th.
Police Departments Pressured to Alter Crime Data
A front-page report in the August 3rd issue of The New York Times relates the concerns of senior police officials across the United States about the pressure the recent sharp drop in crime has placed on them to show ever-decreasing crime statistics. The Times reports a recent series of incidents in which commanders manipulated crime data to significantly reduce the reported crime in areas under their command.
Among the police departments where crime data were underreported or manipulated are New York, Philadelphia, Atlanta, and Boca Raton. (Although the most serious data quality problems occurred in Philadelphia, it appears that many of them were the result of carelessness or stupidity. For years Philadelphia police officers logged crimes months after the crimes occurred.) The chief of the Boca Raton police department was forced to resign, and commanders in other large police departments have been demoted.
One problem with the present system of collecting crime statistics is that the FBI focuses too much on eight major crimes: murder, rape, robbery, and aggravated assault, along with the property crimes of burglary, theft, vehicle theft, and arson. A common thread running through the incidents of police officers altering crime statistics is that commanders have downgraded felonies like aggravated assault and burglary, which are reported to the FBI, to misdemeanors like vandalism, which are not. This is what recently occurred in Boca Raton, where a police captain routinely downgraded felony reports to misdemeanor reports, making the community appear much safer in the national crime statistics than it really was. The actual felony rate was 11% higher than the rate reported to the FBI.
The push to reduce crime even further makes promotions and pay raises increasingly dependent on data showing an overall reduction in crime. After New York City significantly reduced major crime, there was a change in "mindset" among police departments, from virtually ignoring crime statistics to viewing them as a valuable tool for reducing crime. But, according to the Times, the shift within police departments toward obtaining and using high-quality data - much as good accounting data is used by corporate executives - has not been easy. One big step is that several large police departments (Philadelphia among them) have begun to focus on data quality and statistical quality assurance. The article was written by Fox Butterfield.
Simple Theorem Finds 'Fake' Data?
An article in The New York Times's "Science Times" section on August 4th discusses a mathematical theorem known as Benford's Law, named for the late Dr. Frank Benford, a physicist at the General Electric Company. Dr. Benford noticed that the pages of a book of logarithms corresponding to numbers starting with the numeral 1 were much dirtier and more worn than the other pages. He concluded that it was unlikely that physicists and engineers had some special preference for logarithms starting with 1. [Editor's Note: It depends on the numbers they were using.] So Dr. Benford embarked on a mathematical analysis of 20,229 sets of numbers from wildly disparate sources. In all cases the number 1 turned up as the first digit about 30% of the time, more often than the numbers 2, 3, 4, 5, 6, 7, 8, and 9. (Leading zeros don't count: the first digit of a decimal like 0.2 is taken to be 2.)
Dr. Benford derived a formula to explain this. If absolute certainty is defined as 1 and absolute impossibility as 0, then the probability of any digit "d" being the first digit is log10(1 + 1/d). According to the Times, this formula "predicts the frequencies of numbers found in many categories of statistics." As examples, the article uses numbers appearing on the front pages of newspapers, U.S. county populations from the 1990 census, and the Dow Jones Industrial Average from 1990 to 1993.
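The formula is easy to verify directly. A short sketch (ours, not the Times's) tabulating the predicted first-digit frequencies:

    import math

    # Benford's Law: P(first digit = d) = log10(1 + 1/d)
    for d in range(1, 10):
        print(d, f"{math.log10(1 + 1 / d):.1%}")
    # The digit 1 leads about 30.1% of the time, 2 about 17.6%,
    # falling to about 4.6% for 9; the nine probabilities sum to 1.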
"Benford's Law" is used to check data generated by humans for "randomness." For example, income tax returns can be analyzed to determine whether a taxpayer filed a fraudulent tax return. (Carelessness or bad record keeping could produce the same result).
Unfortunately, "Benford's Law" appears to ignore number theory, economics, and history. Most newspaper dates are recent, and start with "19--." Prices of many common goods in the United States (a gallon of gasoline, a quart of milk, a loaf of bread) cost just over $1.00. Price indexes, bid and asked bond prices, short-term rates of return, and other economic and financial numbers are typically a few "points" over 1.00. The article was written by Malcolm Browne and appears on page C4.
Networks to Launch a Nielsen Ratings Rival
The four major television networks have long complained about the way Nielsen Media Research Inc. counts TV viewers. Now, according to The Wall Street Journal, they are planning to spend more than $60 million to launch a rival ratings service. CBS, NBC, ABC, and Fox last week signed letters of intent to back a new ratings service, nicknamed Smart, from Statistical Research, Inc., of Westfield, New Jersey. Statistical Research executives said a number of big advertisers and cable companies are expected to join the venture soon, bringing the total launch budget to an expected $100 million.
Though the ability to offer new ratings data is still two years away, the networks' move nevertheless marks a serious challenge to Nielsen's monopoly - and to the 50-year-old ratings system that determines how billions of dollars in TV ad revenues are spent. According to the Journal, the big broadcast networks, which each pay Nielsen more than $10 million per year for its data, have grown increasingly critical of Nielsen's data quality. They claim that Nielsen significantly undercounts large segments of the population, particularly young people and viewers watching TV outside the home.
But critics of the new service say that it's no coincidence that a Nielsen rival is being launched at a time when network television viewership is continuing to decline. The move raises questions about whether the new service is simply an attempt to count the audience differently so that broadcast networks won't look so bad. It's also unclear for now whether enough advertisers will trust the new ratings, given that the data will be backed by the same networks selling the commercials.
While Statistical Research is launching its ratings service, Nielsen plans to spend tens of millions of dollars to upgrade its own service. Both companies plan to use advanced hardware and software in anticipation of the networks' switch to HDTV. The article was written by Journal staff reporter Kyle Pope and appears on page B1.