a quick poll--need some input

SixFive

bonswa
Forum Member
Mar 12, 2001
BG, KY, USA
Name: Anonymous
Submitted: 08.28.01
Flesch-Kincaid Score: 60.655347607191
Word Count: 2098

"This site is hellacious and outstanding!!"

Soccer


My favorite recreational activity is soccer. I play soccer a lot and have been playing for five or six seasons. In a game not long ago I made a hat trick, or three goals in one game. We placed second in our league this year.

There are lots of rules in soccer and they are all very important. If you don't follow them you will pay the consequences. I'll tell you about them in this paper.

Probably the most important rule is that you can't touch the ball with your hands. If you do, you will be penalized by the other team getting an indirect kick or a direct kick. The only time it will be a direct kick because of a hand ball is when the hand ball is in the goalie box.


Another rule of soccer is that you can't hit the other players or curse at them. If you do hit another player you will either get a yellow card or a red card, depending on the severity of the hit and whether it was an accident or not. A yellow card is a caution and a red card puts you out of the game.

Soccer is played all over the country and all over the world. It is a sport that is in the summer Olympics. The games will be held in Atlanta this year and teams from all over the world will be playing there. Hopefully we will get tickets to one of the games because I really want to see one.

Soccer is a very fun sport and is very good for me. I love it and will always play it. You should try it if you have not already.

Bicycling

Riding my bike is my favorite recreational activity. I have a Haro race group 1zI. I ride every day and often ride to school. There are many types of bikes for sale these days. My friends all ride too. Sometimes we go all over town just for fun.

There are many kinds of tricks you can do on bikes. Some of them are very difficult and take a while to learn. I can only do a few simple ones.

One of the tricks is called an endo. This one I can do, and quite well. It's really pretty simple: all you have to do is get going pretty slowly and hit the front brakes. When you do this your back tire comes off the ground and goes up in the air. Some people that get really good can make the bike turn around.

Another trick is called a bunny hop. Most anyone can do this trick. All you do is pull up your front tire and then push it down fast enough so that the back tire is off the ground. For it to be a bunny hop both tires must be off the ground.

Probably the easiest trick of all is popping a wheelie. To perform this trick you must simply pull up your front tire. Some people can ride wheelies. This is where you pull up your front tire and hold it up for a long time.

There are other tricks that are much harder like doing a flip or bunny hopping over a trash can. I can not do these tricks. Only professionals can perform these hard tricks.

I love riding my bike.

Agriculture

Although greatly reduced as a source of employment, agriculture has undergone a major transformation. Georgia agriculture is modern and mechanized, and the former strong economic dependence upon cotton has been replaced with a diversified agricultural economy based upon the production of soybeans, corn, peanuts, tobacco, poultry, cattle, and horticultural and orchard crops. Much of Georgia's crop production is concentrated on the Inner Coastal Plain.

The Piedmont, once an established farming region, is now characterized by farmers who operate small part-time cattle farms but who earn most of their income from employment in towns and cities. These farmers sell, buy, and trade cattle for a living.
Forest, Georgia's most common landscape component, covers about 65% of the state; forest area has increased by more than 10,000 sq km (3,860 sq mi) since the 1930s. Complexes of longleaf and slash pines cover most of the Coastal Plain, and loblolly and shortleaf pines forest the Piedmont. A forest of oak and pine is dominant on the upper Piedmont, changing to oak and hickory forest in the mountains. The declining acreage for cropland has allowed extensive forest regrowth of pine.

Peaches are a major part of Georgia's agriculture. They are grown all over Georgia because of our wonderful climate. This is also why we are called the Peach State. In Byron there is an outlet called the Byron outlet. It has a huge peach beside it, and it is on a tall pole. I have seen it many times and think about it when I pass a peach orchard.

Minerals

Kaolin, a fine-grained clay found in the central area of the state, is a major export of Georgia. Several clays in commercial use consist largely of kaolinite, a hydrated aluminum silicate. Large deposits of this mineral occur in China; central Europe; Cornwall, England; and several states of the United States. Various grades of kaolin clays may be distinguished. White kaolin clays are fine in particle size, soft, nonabrasive, and chemically inert over a wide pH range. Their largest consumer is the paper industry, which uses them as a coating to make PAPER smoother, whiter, and more printable, and as a filler to enhance opacity and ink receptivity. Ball clays are usually much darker because they contain more organic carbonaceous material. These fine-grained refractory bond clays have excellent plasticity and strength, and they fire to a light cream to white color. For these reasons, ball clays are used extensively in CERAMICS in whitewares, sanitary ware, and wall tile, and as suspending agents in glazes and porcelain enamels. Fireclays are soft, plastic clays used primarily in making REFRACTORY MATERIALS that will withstand temperatures of 1,500 degrees C or more. The most common fireclays, underclays, occur directly under coal seams.

Another one of Georgia?s major minerals is granite. Granite is a light-colored plutonic rock found throughout the continental crust, most commonly in mountainous areas. It consists of coarse grains of QUARTZ (10-50%), potassium FELDSPAR, and sodium feldspar. These minerals make up more than 80% of the rock. Other common minerals include MICA (muscovite and biotite) and hornblende.

Industry

Manufacturing in the South was a minor economic activity until well after the Civil War. In the late 19th century, however, new attitudes toward economic development, a surplus of agricultural labor, and cotton and power resources lured textile manufacturers from New England to the Piedmont. Poor and landless white tenants took advantage of this employment opportunity, but the surplus black agricultural laborers joined the steady migrant stream to the North.

In the post-World War II years Georgia has undergone an economic revolution. Textile manufacturing continues, particularly the carpet industry of northwest Georgia, and apparel manufacturing has become a leading Georgia industry, primarily located in the many small towns and cities of rural areas. Other industries include transportation equipment (automobiles and aircraft), pulp and paper, food processing, and electrical machinery.

Another major industry is making school buses. There is a very large school bus factory. Hundreds of school buses are built there and many repairs are done. The factory is called the Blue Bird factory. This factory employs hundreds of people. In fact, my mom works there sometimes as a nurse. She sometimes has up to 40 patients a day. The school buses they make there are the same ones kids all over the place ride to and from school in. Every morning I see these school buses pass by my house taking lots of little and big kids to school.

Those are two of Georgia's major industries.

Etowah Mounds


The Etowah Mounds are a series of prehistoric earthen burial mounds located in the southern Appalachian Mountains, near Cartersville, Ga. Forming part of a fortified village complex, they constitute one of the largest of the so-called MOUND BUILDER sites in the southeastern United States. The central tumulus-shaped mound is roughly 100 by 115 m (330 by 380 ft) at the base, tapering to 50 by 55 m (165 by 180 ft) on its flattened top. It is approximately 20 m (70 ft) high and originally supported a small structure, probably a temple.

Extensive excavations at the site from 1925 to 1928 yielded copper axes, engraved copper and shell objects, incised pottery, and other artifacts. Also found were stone sarcophagi, called stone-box graves, made of flat stone blocks in which the dead, along with stone figurines, were placed. The Etowah Mounds site was occupied from about AD 1200 to 1700. An assemblage of artifacts dated c.1300 is associated with the Southern Death Cult, so named because the designs on characteristic ritual objects suggest a preoccupation with violence and death. This cultural tradition appears to have spread to mound sites in northwestern Georgia from the eastern Oklahoma site of SPIRO MOUND.

Chickamauga

Chickamauga was a major battle of the American CIVIL WAR fought on Sept. 19-20, 1863. The Confederate army of 66,000 men under Gen. Braxton BRAGG attacked a 58,000-strong Union army under Gen. William S. ROSECRANS along Chickamauga Creek in northwestern Georgia. On the second day of battle the Confederates drove much of the Union army from the field in disorder. Only the stubborn stand of the Union left flank under Gen. George H. THOMAS saved Rosecrans's army from destruction. Bragg failed, however, to follow up his victory aggressively. This lessened its impact on the war and contributed to the Confederate defeat at Chattanooga in November.

Okefenokee

Okefenokee Swamp is located in southeastern Georgia and northeastern Florida. Covering more than 1,553 sq km (600 sq mi), the swamp is drained by the Suwannee and Saint Marys rivers. The swamp is still in a relatively primitive state and contains virgin pine forests, stands of black gum and cypress, and grassland. Wildlife includes alligators and other reptiles, deer, bears, and several hundred species of birds. In 1937 most of the swamp was made a national wildlife refuge.

Robins Air Force Base

The Robins Air Force Base, located near Warner Robins, is home to the Warner Robins Air Logistics Center, the 653rd Support Group, which performs the vital functions of running the huge base, and the worldwide headquarters for the Air Force Reserve. The largest industrial complex in the state, Robins employs approximately 14,500 civilians and 4,400 military personnel and contributes nearly $700 million annually through its payroll to the middle Georgia economy.
Robins is also home to the Museum of Aviation, which welcomes over 200,000 visitors annually to view aircraft, missile exhibits and films on aviation history.


Sam Nunn

Democrat Samuel Augustus Nunn, Jr., b. Perry, Ga., Sept. 8, 1938, U.S. senator from his home state since 1972, is chair of the powerful Senate Armed Services Committee. Nunn attended the Georgia Institute of Technology for three years (1956-59), served a one-year hitch in the Coast Guard, then received his undergraduate (1960) and law (1962) degrees from Emory University in Atlanta. During the next decade, while practicing law, he also served (1968-72) in the Georgia House of Representatives. Nunn readily admits that, like most of his Georgia constituents, he is a conservative Democrat. Always an advocate of strong defense and liberal defense spending, he supported President Ronald Reagan's military buildup in the early 1980s. During the Bush administration, he saved new high-tech weapons such as the Stealth bomber from defense cutbacks, but he also opposed the initial resolution authorizing Bush to use force to drive Iraq out of Kuwait. Nunn has also opposed President Bill Clinton on such highly visible issues as cuts in defense spending and lifting the ban on gays in the military. Nunn remains extremely popular in Georgia, running unopposed in 1990.
 

SixFive

bonswa
Forum Member
Mar 12, 2001
BG, KY, USA
Name: Anonymous
Submitted: 08.28.01
Flesch-Kincaid Score: 77.411772653576
Word Count: 1639

"This site is hellacious and outstanding!!"

Soccer


Table of Contents

Introduction

History of the Activity

Nature of the Activity

Playing Area

Physical Conditioning

Practice Drills

Conclusion





Introduction


Soccer is the world's most popular sport. It is the national sport of most European and Latin-American countries, and of many other nations. Millions of people in more than 140 countries play soccer. The World Cup is held every four years. Soccer is one of the most famous international sports; it is known worldwide and is played in the Olympics.
In a soccer game there are two teams of 11 players who try to score a point by kicking a ball into the opponent's net. Soccer is played on a rectangular field with a net on each short side of the field. All players must hit the ball with their feet or body and only the goalie is allowed to touch the ball with his/her hands. There are many things you can do to condition yourself to play.
Soccer the way we play it came from England in the 1800s. Soccer was not that popular until the mid-1900s. Today soccer is very popular, and it is one of the nation's fastest-growing sports. There are many exercises and drills you can do to improve how you play soccer. There is also plenty of physical conditioning that players can do. Soccer can help you stay fit and healthy. Many people can play soccer and benefit from it. Soccer is very fun and a great recreational sport.




History of the Activity

Games similar to soccer were played in China as early as 400 BC. In about 200 AD the Romans played a game in which two teams tried to score by advancing a ball across a line on the field. The Romans passed the ball to one another but they never kicked it. London children in about 1100 played a form of soccer in the streets. During the 1800s the people of England played a game similar to soccer. Many rules changed and each person interpreted the rules differently. In 1848 a group of school representatives met at Trinity College in Cambridge and drew up the first set of soccer rules. In 1863 English soccer clubs founded the Football Association. By the late 1800s soccer began to spread to the rest of the world. The Canadian Soccer Association was established in 1912, while the United States Soccer Federation was set up in 1913. The first World Cup Championship was in Montevideo, Uruguay. Since then it has been played every four years except during WWII. During the 1970s soccer grew to be a very popular spectator sport as well as a participant sport.


Nature of the Activity

A soccer game begins with a kickoff in the center of the field. A coin is flipped to decide which team will kick off. The other team kicks off at the start of the second half when the teams switch sides or nets. After a team scores, the other team gets to kick off to begin again. The kickoff takes place in the middle of the field. When the ball is kicked it must travel its circumference, and it must touch another player before the kicker can touch it again.
After the ball is in play it remains in play unless it crosses a goal line or a touch line. All players attempt to stop the ball from coming into their zone while at the same time trying to score a goal. A player may kick the ball into the net with any part of the body except the hands and arms. If the ball goes out of bounds the play is restarted with a corner kick, a goal kick, or a throw-in. The referee decides what type to use. If the ball crosses the goal line and the defensive team touched it last, then there is a corner kick by the offense. If the offense touches the ball last and it crosses the goal line, then it is a goal kick. A throw-in happens when the ball crosses the touch line. When it crosses the touch line the team that did not touch it last throws the ball in bounds. The ball is thrown over their head with two hands. Fouls are called when a player does not obey the rules and acts unsportsmanlike. When a foul is called the opposing team receives either a penalty kick, a direct free kick, or an indirect free kick.


Physical Conditioning

There are many exercises that people can do to improve in soccer. Exercises that strengthen your legs and improve flexibility are ideal. Physical conditioning is important if you plan on being good at soccer. Here are five exercises that are ideal for soccer:
1. Running: running helps to improve cardiovascular fitness. In soccer there is lots of running for the ball, so endurance and speed are a must.
2. Leg Extension: using weights can help strengthen the legs. Using weights makes you kick harder and makes the ball travel farther; as a result, you become a better player.
3. Leg Machines: exercising all muscles in the leg makes you kick harder and prevents injury when you are diving all over for the ball. The strong muscles help prevent injuries.
4. Stretching: stretching allows you to be more flexible. Sometimes soccer players need to kick the ball in the most awkward positions. Flexibility helps the player to kick the ball in those positions more effectively.
5. Weight Training: all around weight training makes a soccer player even better. A stronger body helps prevent injury and improve all around performance.

Practice Drills

Practice drills help the soccer player become more skillful and a better player. There are many drills that can be done, ranging from dribbling to heading. Some of these drills include:
1. Practicing kicking the ball is a very important and often done drill. To practice the player will kick the ball into the net. Often there is a goalie that they try to score on. Kicking is the most important skill in soccer. Practicing will make your kick stronger and more controllable.
2. Passing is also a very important skill. One drill that can be done is to run side by side with another player and pass the ball back and forth. This skill will improve your passing and receiving skills. Passing is also vital in the game of soccer.
3. Heading is one of the only ways to legally hit the ball when it is high in the air. With another player, heading can be practiced. One player throws the ball high over top of the other player. The player then will jump up and hit the ball with his forehead and try to control the ball. Heading is very hard and often lots of practice is required.
4. Control of the ball is also very important. Setting up pylons at any spacing and weaving through them in a pattern-like formation can improve your control of the ball. Trying to go quickly can also improve your running speed while dribbling the ball.
5. One on one practices improve both your dribbling and tackling. With two players one is given the ball and must keep the ball away from the other player. While one player is improving his faking and dribbling the other is practicing his defense and tackling. When this drill is done often it can improve your offense as well as defense.




Conclusion

Soccer can be played by many age groups. Children often play the sport in school as early as elementary school. Many adults also play the sport. Seniors rarely play soccer because of how easily they can get injured. Soccer is often very demanding. Soccer for many kids can be very fun. Most children don't think of soccer as work and often enjoy playing it. Adults also sometimes find soccer fun, and some adults even have careers as professional soccer players.
Soccer is very valuable in obtaining "life long" fitness. Soccer can be a very demanding sport. Soccer can improve your cardiovascular fitness as well as strength and flexibility. All the physical conditioning and practice drills are very important in keeping fit. Soccer players are able to be healthy and strong because of the physical involvement.



 

SixFive

bonswa
Forum Member
Mar 12, 2001
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 45.964958869581
Word Count: 1628

"This site is hellacious and outstanding!!"

A Nuclear Reactor


The term nuclear reaction means an interaction between two or more nuclei, nuclear particles, or radiation, possibly causing transformation of the nuclear type; it includes, for example, fission, capture, and elastic scattering. Reactor means the core and its immediate container. Nuclear reactors are used to produce electricity, and the number of nuclear reactor plants has grown significantly. Electricity can be generated in a number of ways, one of which is thermal power. Thermal power employs two basic, related systems: a steam supply system and an electricity generating system. The steam supply system produces steam from boiling water by the burning of coal, and the electricity generating system produces electricity by steam turning turbines.

The nuclear power plants of this century depend on a particular type of nuclear reaction, fission (the splitting of a heavy nucleus, such as the uranium atom, to form two lighter "fission fragments" as well as less massive particles such as neutrons). In nuclear reactors this splitting is induced by the interaction of a neutron with a fissionable nucleus. Under suitable conditions, a "chain" reaction of fission events may be sustained. The energy released from the fission reactions provides heat, part of which is ultimately converted into electricity. In present-day nuclear power plants, this heat is removed from the nuclear fuel by water that is pumped past rods containing the fuel.

The basic feature of the nuclear reactor is the release of a large amount of energy from each fission event that occurs in the reactor's core. On average, a fission event releases about 200 million electron volts of energy; a typical chemical reaction, on the other hand, releases about one electron volt. The difference is a factor of roughly 200 million. The complete fission of one pound of uranium would release roughly the same amount of energy as the combustion of 6,000 barrels of oil or 1,000 tons of high-quality oil.

The reactor cooling fluid serves a dual purpose. Its most urgent function is to remove from the core the heat that results when the energy released by the nuclear reactions is transformed by collisions into random thermal motion. An associated function is to transfer this heat outside the core, typically for the production of electricity. The designer provides for a nuclear core in a container through which a cooling fluid is pumped. This fluid may be used directly to drive a turbine generator; alternatively, it may be used to heat a secondary fluid which drives the turbine. In almost all commercial systems that fluid is vaporized water.
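As a rough back-of-the-envelope check of that comparison, here is a small Python sketch; the 200 MeV per fission figure comes from the essay itself, while the roughly 6.1 GJ of energy per barrel of oil is an assumed round value, not something the essay states.

# Rough check: complete fission of one pound of uranium vs. barrels of oil
AVOGADRO = 6.022e23          # atoms per mole
MEV_TO_J = 1.602e-13         # joules per MeV

pound_g = 453.6                             # grams in one pound
atoms = pound_g / 235.0 * AVOGADRO          # U-235 atoms in one pound
energy_j = atoms * 200.0 * MEV_TO_J         # about 200 MeV released per fission

barrel_oil_j = 6.1e9                        # assumed ~6.1 GJ per barrel of oil
print(energy_j)                             # roughly 3.7e13 J
print(energy_j / barrel_oil_j)              # roughly 6,000 barrels, as stated above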
Fission is the term used to describe the splitting of a heavy nucleus into two or more smaller nuclei. Slow-moving neutrons are more easily captured by the nucleus. A moderator is a medium which causes neutrons to travel more slowly. Graphite, heavy water, and beryllium are all excellent moderators, capable of slowing neutrons without absorbing them. The neutrons liberated by fission travel very quickly unless moderated. A very large amount of energy is released when an atom undergoes fission.
In a typical fission reaction, the energy released is distributed as follows: 170 MeV (million electron volts) as kinetic energy of fission fragments, 5 MeV as kinetic energy of neutrons, 15 MeV as energy of beta particles and gamma rays, and 10 MeV as energy of antineutrinos.
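Those contributions add up to the figure of about 200 million electron volts quoted earlier:

170 MeV + 5 MeV + 15 MeV + 10 MeV = 200 MeV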
An example of a typical fission is shown below. Mass is not conserved in a nuclear reaction. The products formed during nuclear fission have a slightly lower mass, due to the nuclear mass defect. This mass defect can be used to determine the nuclear binding energy which held the heavier nucleus together and was released when fission occurred. The energy released by a fission can be calculated by finding the difference between the mass of the parent atom and neutron and the masses of the daughter atoms and emitted neutrons, and converting this mass "loss" into energy using E = mc^2.

Neutrons released when an atom undergoes fission are capable of causing other nuclei to undergo fission, if the neutrons are slowed down by a moderator. A sustained fission reaction caused in this way is called a chain reaction. Natural uranium ore contains about 0.7% uranium-235. To increase the likelihood of sustaining a chain reaction for uranium, the fissionable isotope of uranium must be increased in its relative proportion through enrichment. An isotope is one of two or more atoms of an element that differ in the number of neutrons found in the nucleus.

A nuclear reactor produces a sustained chain reaction at a controlled rate. The heat energy produced by the reaction is used to drive turbines, generating electricity. Control rods, made of materials such as cadmium which absorb neutrons, are used to control the rate of the chain reaction in a nuclear reactor. A critical mass of fissionable material is the minimum mass that will produce a nuclear explosion. To produce a sustainable nuclear chain reaction requires more material than the critical mass.
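The example equation referred to above did not survive the copy; one commonly cited fission of this type, with rounded atomic masses, is given here as an illustration (the fragment pair is only one of many possibilities):

n + U-235 -> Ba-141 + Kr-92 + 3n

Mass before: 1.009 u + 235.044 u = 236.053 u
Mass after:  140.914 u + 91.926 u + 3 x 1.009 u = 235.867 u
Mass defect: about 0.186 u, which by E = mc^2 (1 u = 931.5 MeV) is roughly 173 MeV;
the remainder of the ~200 MeV appears later as decay energy from the fragments.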
Most reactors today use uranium, bundled in the form of uranium oxide fuel pellets, to produce electricity. The refined uranium oxide fuel pellets are stacked into cylindrical rods. The rods are arranged into a fuel bundle, which is then ready to be placed in special pressure tubes inside the reactor. The reactor vessel is called the calandria. Nuclear reactors cannot explode like a nuclear bomb. Even under a worst-case scenario, with a core meltdown, a critical mass of fuel would not be present and the fuel would burn into the ground. (This, of course, would lead to very serious consequences, including possible loss of life and environmental damage.) Refuelling can be done by removing fuel bundles from the pressure tubes and replacing them with new bundles.

Heavy water is used as the moderator in a reactor. Heavy water contains deuterium, an isotope of hydrogen having one neutron in the nucleus. Heavy water also transfers heat from the fuel into a heat exchanger, which heats ordinary water to produce steam. The steam produced is used to turn turbines which are connected to electric generators. Condensers change the steam back into water so it can be cycled back to the steam generator. If excess heat builds up in the calandria, the heavy water can be drained out. This causes the chain reaction to stop, because the moderator is no longer present.

Supporters of the use of nuclear energy feel that it is a safe and effective way to produce energy. With the demand for energy increasing, and the problems associated with burning fossil fuels, such as acid precipitation and the greenhouse effect, they regard the use of nuclear energy as being necessary.
Nuclear energy avoids some of the problems of generating hydro-electric power. Flooding land to build dams creates environmental and social problems. The use of nuclear energy may avoid the need for long transmission lines. Nuclear plants can be built in relatively close proximity to where the power is needed. Nuclear energy produces very small amounts of waste by volume. The radioactive materials can be concentrated for storage and monitoring in one place. Poisonous metals (such as arsenic, lead, and mercury), toxic gases, carbon dioxide, and fly ash are not released into the atmosphere.

Critics of the use of nuclear energy cite various problems with its use. The opposition to the use of nuclear energy has grown so strong in recent years that some reactors have been shut down. Other reactors scheduled for development have been delayed or were never completed because of the social and political pressure exerted by the antinuclear lobby. The debate continues. The Chernobyl nuclear accident led to a justifiable scepticism about any claims of the safety of nuclear reactors, particularly if those claims come from spokespersons of the industry, who often cite the strict controls and regulations faced by the industry.

Used nuclear fuel is both hot and radioactive. It is stored under water in large cooling pools for up to two years after use, until it cools. Some of the used fuel will still remain radioactive for up to several thousand years. This concerns many people. The storage of used fuel is a contentious issue for those concerned about the protection of the environment. No ideal solution has yet been developed to dispose of the waste. Current proposals for waste management merely offer temporary storage solutions until better methods become available. Storage of waste in underground salt mines offers one possible solution.

Arguments for or against the use of nuclear energy should be based on reason, not emotion. One needs to remain open-minded, listening carefully to the arguments presented by those who hold a different position. If one examines the uses of energy since before the Industrial Revolution, it becomes apparent that the major source used has changed throughout time, based on economics, the development of new technologies, and a variety of other factors. Some of these same factors are at work today, determining which sources of energy will be most advantageous to use in the future. A concern for the protection of the environment needs to play a prominent role whenever decisions which might have an adverse effect on the environment are being considered. Alternative solutions to problems need to be examined with regard to their environmental impact. One very important strategy is to promote conservation. Instead of demanding more and more energy, at the expense of the environment and our resources, individuals, institutions, and government all have to search for ways to conserve energy. If everyone strives to use energy wisely, existing resources will last longer. Less damage to the environment will occur.


 

SixFive

bonswa
Forum Member
Mar 12, 2001
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 51.902157947755
Word Count: 2102

"This site is hellacious and outstanding!!"

Chemical Reactions


Chemical reactions are the heart of chemistry. People have always known that they exist. The Ancient Greeks were the first to speculate on the composition of matter. They thought that it was possible that individual particles made up matter.

Later, in the Seventeenth Century, a German chemist named Georg Ernst Stahl was the first to postulate on chemical reactions, specifically combustion. He said that a substance called phlogiston escaped into the air from all substances during combustion. He explained that a burning candle would go out if a candle snuffer was put over it because the air inside the snuffer became saturated with phlogiston. According to his ideas, wood is made up of phlogiston and ash, because only ash is left after combustion. His ideas soon ran into contradiction. When metal is burned, its ash has a greater mass than the original substance. Stahl tried to cover himself by saying that phlogiston would take away from a substance's mass or that it had a negative mass, which contradicted his original theories.

In the Eighteenth Century Antoine-Laurent Lavoisier, in France, discovered an important detail in the understanding of the chemical reaction combustion: oxygen (which he called oxygène). He said that combustion was a chemical reaction involving oxygen and another combustible substance, such as wood.

John Dalton, in the early Nineteenth Century, proposed the atomic theory. It gave way to the idea that a chemical reaction was actually the rearrangement of groups of atoms called molecules. Dalton also said that the appearance and disappearance of properties meant that the atomic composition dictated the appearance of different properties. He also came up with the idea that a molecule of one substance is exactly the same as any other molecule of the same substance.

People like Joseph-Louis Gay-Lussac added to Dalton's concepts with the postulate that the volumes of gases that react with each other are related (14 grams of nitrogen reacted with exactly three grams of hydrogen, eight grams of oxygen reacted with exactly one gram of hydrogen, etc.).
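Those figures are mass ratios, and they correspond to familiar compounds; as an illustration (not part of the original essay), written in the same style as the equations later in this essay:

N2 + 3H2 -> 2NH3     (28 g of nitrogen to 6 g of hydrogen, i.e. 14 to 3)
2H2 + O2 -> 2H2O     (32 g of oxygen to 4 g of hydrogen, i.e. 8 to 1)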

Amedeo Avogadro also added to the understanding of chemical reactions. He said that all gases at the same pressure, volume, and temperature contain the same number of particles. This idea took a long time to be accepted. His ideas led to the subscripts used in the formulas for gases.

From the work of these and many other chemists, we now have a mostly complete knowledge of chemical reactions. There are now many classification systems to classify the different types of reactions. These include decomposition, polymerization, chain reactions, substitution reactions, elimination reactions, addition reactions, ionic reactions, and oxidation-reduction reactions.

Decomposition reactions are reactions in which a substance breaks into smaller parts. As an example, ammonium carbonate will decompose into ammonia, carbon dioxide, and water. Polymerization reactions are reactions in which simpler substances combine to form a complex substance. The thing that makes this reaction unusual is that the final product is composed of hundreds of the simpler reagent (a substance that contributes to a chemical reaction) species. One example is the polymerization of terephthalic acid with ethylene glycol to form the polymer called Dacron, a fibre, or Mylar, in sheet form:

nHO2C(C6H4)CO2H + nHOCH2CH2OH -> [...OC(C6H4)CO2CH2CH2O...]n + 2nH2O

in which n is a large number of moles. A chain reaction is a series of smaller reactions in which the previous reaction forms a reagent for the next reaction. The synthesis of hydrogen bromide is a good example:

H2 + Br2 -> 2HBr

This is a simple equation that doesn't properly describe the reaction. It is very complex and starts with this:

Br2 -> 2Br
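The essay stops at the initiation step above; in the textbook mechanism (added here for completeness, not part of the original text), the chain is then carried by two propagation steps that regenerate the bromine atom:

Br + H2 -> HBr + H
H + Br2 -> HBr + Br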

The next three reactions are related and should be grouped together. A substitution reaction is a reaction in which a substance loses one or more atoms and replaces them with the same number of atoms of another element from another substance. Here is the example of chloroform reacting with antimony trifluoride:

CHCl3 + SbF3 -> CHClF2

An elimination reaction is a reaction in which a compound is broken into smaller parts when heated. Here is an example when the same substance is heated and goes through another reaction:

2CHClF2 -> C2F4 + 2HCl

An addition reaction is a reaction in which atoms are added to a molecule. If the added atoms are hydrogens, then the reaction is called a hydrogenation reaction. If oleic acid is hydrogenated, this is what you get:

C18H34O2 + H2 -> C18H36O2

Another reaction is called an ionic reaction. It occurs between two ions and can happen very quickly. For example, when silver nitrate and sodium chloride are mixed you get silver chloride:

AgNO3 + NaCl -> AgCl + NaNO3

The last type of reaction is called oxidation-reduction.

These are reactions that involve a change in oxidation number. It is an oxidation reaction if the oxidation number goes up. It is a reduction reaction if the oxidation number goes down.
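The essay gives no example for this class, so here is a textbook illustration (an addition, not from the original): the reaction of zinc with copper sulfate.

Zn + CuSO4 -> ZnSO4 + Cu

Zinc's oxidation number goes from 0 to +2 (it is oxidized), while copper's goes from +2 to 0 (it is reduced).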

Chemical reactions can also be classified by their energy behavior into three types: exoergic (exothermic), endoergic (endothermic), and aergic (athermic). The different types of reactions handle energy differently.

Exoergic, or exothermic, reactions release energy during the reaction. Combustion is one of the major reactions that do this. The burning of wood, or any other fuel, gives off heat, and the burning of glucose in our bodies gives off both energy and heat.

Endoergic, or endothermic, reactions absorb energy during the reaction. The melting of an ice cube is an example of an endothermic reaction.

Aergic, or athermic, reactions neither give off nor absorb energy. There are very few cases in which this happens.

There are some things that must be considered in a chemical reaction. Kinetics is one of these things. Kinetics determines the speed of the reaction and what is happening on a molecular level. There are a few things that decide the course and speed of the reaction.

The first thing is the reactants. Different reactants react at different speeds. Even the position of the reactants will affect the reaction rate.

The next thing is the catalyst, which contributes a needed substance to the reaction. It is part of the energy considerations. The catalyst is an outside substance that is included in the reaction, but is not consumed during the reaction like the reactants are. Catalysts cannot make impossible reactions occur; they only contribute to the reaction to increase the reaction rate. There are also such things as negative catalysts, or inhibitors. Inhibitors retard the reaction rate. This is also a way to control reactions. A good example in nature of a catalyst is in a firefly. The reaction that releases the light is complex. Luciferin, which the firefly makes naturally, is oxidized in the presence of luciferase, another natural enzyme, which acts as a catalyst in the reaction. Thus, the reaction makes an excited form of luciferase, which soon returns to its original state. Energy as light is released when the luciferase returns to its normal state. The insect can easily control this reaction with an inhibitor it naturally makes.

Another contributor in this consideration is entropy. It is the measure of the energy in the reaction that is not available for work and instead goes into disorder. Entropy is simply a measurement of unusable energy in a closed thermodynamic system.

An acid and base reaction is another thing to consider. Acids and bases react very readily to each other. When an acid and a base react, they form water and a salt.

Acids and bases neutralize each other and form a salt as a byproduct. This reaction reaches what is called equilibrium (when the solution is completely neutral in charge and acidity).

One example of how acids and bases react is the reaction of calcium hydroxide and phosphoric acid to produce calcium phosphate and water:

3Ca(OH)2 + 2H3PO4 -> Ca3(PO4)2 + 6H2O


The last detail is the reaction conditions. The temperature, humidity, and barometric pressure will affect the reaction. Even a slight change in any one of these could change the reaction.
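The essay does not give a formula for this, but one standard way to make the temperature dependence concrete is the Arrhenius equation (not mentioned in the original), which relates the rate constant k to the absolute temperature T:

k = A * exp(-Ea / (R * T))

Here A is a frequency factor, Ea is the activation energy, and R is the gas constant; even a modest rise in T can increase k noticeably.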

There are many branches of chemistry that use chemical reactions; in fact, almost all of them do. Here are some examples.

Photochemistry is one branch of chemistry that deals with chemical reactions. It has to do with the radiant energy of all kinds formed during chemical reactions. Photochemists experiment with chemical reactions. They will perform reactions normally only possible at high temperatures at room temperature under ultraviolet radiation. The reaction rate can be controlled for observation by varying the intensity of the radiation. X-rays and gamma rays are commonly used in these procedures. The most important photochemical reaction is photosynthesis. Carbon dioxide and water combine, with chlorophyll as a catalyst, to give off oxygen. Photochemical reactions are caused by photons that are given off by the light source. The reactant molecules absorb the photons and get excited. In such an excited state, they can decompose, ionize, cause a reaction with other molecules, or give off heat.
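The overall photosynthesis reaction mentioned above can be summarized, in the same plain notation used elsewhere in this essay, as:

6CO2 + 6H2O -> C6H12O6 + 6O2     (driven by light, with chlorophyll as the catalyst)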

Another science that uses chemical reactions is biochemistry. Biochemists use them to produce products that a person either can't produce or cannot produce as well as they should. The best example of this is the production of insulin. It was first produced in very tiny beads until someone realized that the body does it in a very similar way. That person was Robert B. Merrifield. He was the first to urge scientists to study living systems for the answers to problems that could be solved by synthesizing chemical reactions in the body. This was actually the first step toward the development of bionics.

Scientists today are still toying with chemical reactions. They are trying to control them with lasers. Scientists are trying to use lasers to prod a chemical reaction that could go one way or another, the way they want it to. They want to direct the molecules in one direction. The control of photons to excite molecules and cause reactions has been elusive. Recently, though, chemist Robert J. Gordon at the University of Illinois achieved "coherent phase control of hydrogen disulfide molecules by firing ultraviolet lasers of different wavelengths at them." Laser chemistry looks promising and is a way that chemistry is still being expanded. Again, chemical reactions are the main part of a branch of chemistry.

Here again, scientists are playing with chemical reactions. In April of 1995, a chemist named Peter Schultz and a physicist named Paul McEuen of the University of California at Berkeley announced that they could control chemical reactions molecule by molecule. "The key to the technique is to put a dab of platinum on the microscopic tip of an atomic force microscope. (The tip of such a microscope is a tiny cantilever that rides like a phonograph needle just above the surface of a sample and reacts to forces exerted by the electrons beneath it.)" The platinum acts like a catalyst, stimulating a reaction between two reactants one molecule at a time. The molecules are stimulated in a pattern giving the wanted results. This discovery opens doors for nanoengineering and material sciences. It gives a good view of what happens, one molecule at a time.

Chemical reactions are a large part of chemistry. This paper is an overview of that extensive subject. It gives a good idea about the history of chemical reactions as well as the future. Hopefully, there will be no end to the expansion of chemistry and our knowledge. Since scientists are still experimenting, chemical reactions will always be a part of chemistry.

Bibliography


"Chemical Reactions," Encyclopedia Brittanica MACROPEDIA, 1995, Vol. 15

"Dances With molecules," Science News, Vol. 147, May 27, 1995

Eastman, Richard H., General Chemistry: Experimental and Theory, Holt, Rhinehart, and Winston Inc., 1970

"One Molecule at a Time", Discover, January 1996

Pauling, Linus and Peter, Chemistry, W. H. Freeman and Co., 1975

"Reactions, Chemical," Encyclopedia Americana, 1982, Vol. 23

"Reactions, Chemical," Academic American Encyclopedia, 1991, Vol. 16

 

SixFive

bonswa
Forum Member
Mar 12, 2001
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 65.275605891106
Word Count: 2521

"This site is hellacious and outstanding!!"

Plutonium, Our Country's Only Feasible Solution


Abstract:
Should we begin to manufacture one of the most destructive and infamous
substances on the face of the Earth once again? The engineers say yes, but
the public says no. The United States stopped making this element with the
ban on manufacturing nuclear weapons. But with the continuing problem with
our ever diminishing energy sources, some want us to begin using more
nuclear energy and less energy from natural resources. This paper is going
to discuss what plutonium is, the advantages and disadvantages of its use,
and why we should think about restarting our production of this useful
element.

After the United States dropped "Fat Man" and "Little Boy" on Japan ending
World War II, the public has had some type of understanding about the
power of plutonium and its devastating properties, but that is all anyone
heard.
After WWII, Americans started to think about what the atomic bomb could do
to the U.S. and its people. When anyone mentioned plutonium or the word
"nuclear" the idea of Hiroshima or Nagasaki being destroyed was the first
thing people thought about. No one could even ponder the idea that it
could be used for other more constructive things like sources of energy or
to keep a person's heart beating. Then we started to build more reactors
and produce more of the substance but mostly for our nuclear weapons
programs.
Along with reactors, sometimes comes a meltdown which can produce harmful
effects if it isn't controlled quickly enough. After such instances as the
Hanford, Washington reactor meltdown and the accident in the U.S.S.R. at
the Chernobyl site, no one wanted to hear about the use of plutonium. The
United States government banned nuclear testing and also ended the
production of plutonium.(Ref. 5) Now we are in a dilemma.
We are in need of future sources of energy to power our nation. We are
running out of coal and oil to run our power plants.(Ref. 7) We also need
it to further our space exploration program. People need to understand the
advantages to using plutonium and that the disadvantages are not as
catastrophic as they seem. With the turn of the century on its way, the
reemergence of plutonium production will need to be a reality for us to
continue our way of life.
In 1941, a scientist at the University of California, Berkeley, discovered
something that would change our planet forever. The man's name was Glenn T.
Seaborg, and what did he discover? The element plutonium.(ref. 10)
Plutonium, or Pu, element 94 on the periodic table, is one of the most
unstable elements on the earth. Plutonium-239 is formed when uranium-238
absorbs a neutron and then undergoes two beta decays. Plutonium is a
silvery-white metal that has a very high density of 19.816 g/cm3.(ref. 10)
It is rarely found in the earth's crust; the majority of the substance has
to be produced in the cores of nuclear reactors.
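A sketch of that production chain (standard nuclear chemistry, added here for
illustration; half-lives are rounded):

U-238 + n -> U-239
U-239 -> Np-239 + beta particle   (half-life about 24 minutes)
Np-239 -> Pu-239 + beta particle  (half-life about 2.4 days)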
Plutonium can be found in fifteen different forms, or isotopes, and their
mass numbers range from 232 to 246.(ref. 13) Radionuclide batteries used in
pacemakers use Pu-238, while Pu-239 is used in reactors and for Nuclear
weapons.(ref. 13) This paper will focus on the isotopes Pu-238 and Pu-239.
Plutonium can be very advantageous for the United States. It can be used
for several purposes. The three major advantages to using this element are
for an energy source, power for nuclear propulsion in space exploration
and thermo-electric generators in cardiac pacemakers.
The first use for plutonium, nuclear power, is obviously the most
beneficial use. Plutonium 239 can be used to power nuclear reactors. The
average nuclear reactor contains about 325 kilograms of plutonium within
its uranium fuel.(ref. 7) This complements the uranium fission process.
With the continually decreasing supply of coal and oil to power our
nation, we need a substitute to complement our energy needs and right now
the best replacement is that of nuclear energy.(ref. 7) At the moment
there are one hundred and ten nuclear power plants in the United States
and they produce one-fifth of the nation's electricity. Nuclear energy has
been proven to be the cheapest, safest, cleanest and probably the most
efficient source of energy.(ref. 7)
Nuclear power plants do not use as much fuel as the plants burning coal
and
oil. One ton of uranium produces more energy than several million tons of
coal and plutonium can produce much more energy than uranium.(ref. 12)
Also, the burning of coal and oil pollutes our air, and the last thing we
need is more pollution to worsen the greenhouse effect.
Nuclear power plants cannot contaminate the environment because they do
not release any type of pollution.(ref. 2) Plutonium can also be recycled
by using an enrichment process. This will produce even more energy. Coal
and oil can not be recycled. What
is left by their uses is what has been contaminating our atmosphere since
the 1800's.
You might ask how exactly is plutonium converted into an energy source?
Well it is obviously quite complicated to explain. Basically, power comes
from the fission process of an atom of the element and produces over ten
million times the energy produced by an atom of carbon from coal. One
kilogram of plutonium consumed for three years in a reactor can produce
heat to give ten million kilowatt-hours of electricity. This amount is
enough to power over one-thousand Australian households.(ref. 7)
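A rough sanity check of that figure, written as a small Python sketch; the
200 MeV released per fission is a standard value, and the roughly one-third
thermal-to-electric conversion efficiency is an assumption, not something the
essay states.

# Rough check: electricity from fissioning 1 kg of Pu-239 at ~200 MeV per fission
AVOGADRO = 6.022e23              # atoms per mole
MEV_TO_J = 1.602e-13             # joules per MeV

atoms = 1000.0 / 239.0 * AVOGADRO      # Pu-239 atoms in one kilogram
heat_j = atoms * 200.0 * MEV_TO_J      # total fission heat, about 8e13 J
heat_kwh = heat_j / 3.6e6              # about 22 million kWh of heat
electric_kwh = heat_kwh * 0.33         # assumed ~33% plant efficiency
print(electric_kwh)                    # about 7 million kWh, the same order as above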
Presented with this information, it is only common sense that we should
not depend upon fossil fuels to take us into the 21st century. It is
obvious that our future lies in the hands of nuclear reactors and the use
of plutonium.
The second major use for plutonium is for space exploration with its
ability to
power nuclear propulsion. Nuclear electric propulsion is using energy from
plutonium to power space vehicles.(ref. 3) One of the major goals of NASA's
space program is to, one day, get to Mars, and it looks like the only way
it is going to happen in our current fiscal condition, is if we use
plutonium, instead of chemical fuel, to power our explorations. Nuclear
electric propulsion can be defined as using small plutonium based bricks,
to power space vehicles for interplanetary trips. Nuclear electric systems
provide very low thrust levels and use only very small amounts of fuel
during the voyage.(ref. 3,4) Using electric propulsion also allows the use
of less fuel, making the spacecraft's launch weight much lower than it would
be with chemical fuel.(ref. 3)
The last beneficial use for plutonium is for cardiac pacemakers. The
thermo-electric generator which is powered by radionuclide batteries that
powers the pacemaker uses Pu-238.
One of the obvious uses of plutonium, whether it is an advantage or a
disadvantage, is for weaponry. It is an advantage if we need to use it
against a foe, but it is disadvantageous if our foes use it against the
United States.
Now that we are bound by the Non-proliferation Treaty and the Test
Ban Treaty, we no longer can make and/or test nuclear weapons.(ref. 5)
This should help end ideas about nuclear war and other disadvantages to
having plutonium in other countries' supplies. Now that we have recognized
three important uses for Plutonium and that the threat of nuclear war is
no longer as feasible as before, we should recognize the disadvantages of
this great energy source. They mostly have to do with excess waste and
health effects from the use of nuclear energy.
In 1986, a reactor located in Russia at the Chernobyl power plant had a
meltdown and radiation escaped from the plant.(ref. 8) Several dozen died
from this incident. Nuclear explosions produce radiation. When it comes
within human contact, radiation hurts cells which can sicken people. The
cause of the Chernobyl meltdown was mostly because of human error. They
tried to perform an experiment at a time when they shouldn't have, and
many people paid for their incompetence.
There are waste disposal problems that occur with the use of nuclear
reactors. Waste also produces radiation which can be lethal. Since waste
can hurt and kill people who come in contact with the substance, it cannot
be thrown away in a dumpster like other garbage. Waste has to be put in
cooling pools or storage tanks at the site of the reactors. Another
problem is that the reactors can last for a maximum of fifty years. Even
though plutonium is chemically hazardous and produces harmful radiation,
it isn't close to being the most toxic substance on the planet. Such
substances as caffeine, or the radiation sources in smoke detectors, can
have a greater toxicity than the same amount of plutonium by mass.(ref. 2)
There are basically three ways plutonium can hurt humans. The first is
ingestion. Ingestion, though not totally safe, is not as bad as we
think. The fact is, plutonium passes through the stomach and intestines
and cannot be absorbed, and therefore is released with other waste we
produce.(ref. 1)
The second route plutonium can take to be hazardous is through open
wounds. This form of contact is very rare and basically cannot happen if
the element is handled correctly with protective measures such as correct
clothing and health monitor procedures.(ref. 1)
The last, main threat to our society comes from inhalation. If inhaled,
plutonium is exhaled on the next breath or gotten rid of through the
mucous flow from the throat and bronchial system and released as with
ingestion. However, some could get trapped and put into the blood stream
or lymph nodes.(ref. 1) This has the possibility to cause cancer in the
future. This might sound frightening, but what we need to realize is that
inhaling this
type of substance is part of some of our daily lives.
The problem of inhaling Pu-239 isn't much different from inhaling such
radionuclides as the decay products of radon. Radon is a radioactive
gas that can cause
cancer.(ref. 6) It comes from the decay of uranium in soil, rock and
water. Inhaling this substance can damage your lungs and lead to cancer
over a lifetime. Everyone who lives in homes, works in offices or goes to
school, can be affected by the gas. If you live in a brick house, you
could be taking a serious risk if you don't get the radon level tested. A
1990 National Safety Council report showed that radon causes, on the
average, approximately 14,000 deaths a year and can go as high as 30,000
deaths a year.(ref. 6)
After learning about what radon gas can do to humans, shouldn't we be more
concerned about what a naturally occurring substance can do rather than
worrying about what plutonium and its rare contamination might do? Also,
how many American citizens will actually have a chance to come in contact
with any plutonium isotope in their lifetime?
As you can see, if we start to produce plutonium once again, we will
benefit greatly from its use. We can use it to help power nuclear reactors
which can power our nation. It can also be recycled and used once again
which is one thing fossil fuels cannot do. Nuclear electric propulsion and
its use of plutonium will help power space exploration into the next
century and maybe even get us to Mars. Pu-238 is also helpful in powering
cardiac pacemakers, one of the great biomedical inventions of the 1900s.
With these constructive and productive uses, we shouldn't even debate on
the fact that we need plutonium for the future. You may think that by
producing plutonium, it will automatically go toward our nuclear weapons
program. With non-proliferation and testing banned, this, obviously, is no
longer an option. What about nuclear waste and radiation exposure? Well,
unless an individual uses safety precautions and other preventive
measures when and if he or she handles the substance, he or she should
expect nothing less than radiation poisoning and contamination.
If you're still concerned about exposure to nuclear radiation, you're in
for a big surprise when you find out you can't avoid it. There is more of
a chance you will die from
radon gas than there is from plutonium.(ref. 6) After considering all
these factors, whether they are advantages or disadvantages, it is obvious
that the use of plutonium is, in fact, feasible and the disadvantages are
highly unlikely to affect your health and well being. You probably should
be more worried about dying in an automobile accident or a plane crash.
References

1. ans.neep.wise.edu/~ans/point_source/AEI/may95/plutonium_eff.html
(AEI: May 1995, How Deadly is Plutonium)

2. laplace.ee.latrobe.edu.au:8080/~kh...statements/perspectives
-on-plutonium.html
(A Perspective on the Dangers of Plutonium)

3. letrs.nasa.gov/cgi-bin/LeTRS/browse.pl?1994/E-8242.html
(Nuclear Electric Propulsion)

4. spacelink.msfc.nasa.gov/NASA.
Proje...icles/Proposes.Sysytems/Nuclear.Propulsion
( NASA fact sheet, Dec. 1991)

5. tqd.advanced.org/3471.nuclear_politics_body.html (Nuclear Politics)

6. www.epa.gov/docs?RadonPubs/citquide.txt.html (Citizen's Guide to Radon)

7. www-formal.stanford.edu/jmc/progress/nuclear-faq.html
(Questions about Nuclear Energy)

8. www.ieer.org/ieer/fctsheet/fm_hlth.html
(IEER: Fissile Materials Health & Environmental Dangers)

9. www.nucmet.com/CompOver.html (NMI Company Overview)

10. www.teleport.com/~aaugiee/plu.htm (Background on Pu-238/239)

11. www.uilondon.org/nfc.html (The Nuclear Fuel Cycle)

12. www.uilondon.org/ci3_plu.html (Core Issues no.3, The Uranium Institute
1995)

13. www.uic.com.au/nip18.htm (Plutonium)



 

SixFive

bonswa
Forum Member
Mar 12, 2001
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 65.753083140664 ?
Word Count: 1958

"This site is hellacious and outstanding!!"

Salinity Changes


I chose to experiment with the effects of salinity changes on the polychaete, Nereis succinea. Along with the other members of the group, Patty and Jeremy, I was curious to see whether the worms would engage in adaptive behavior when placed in a tank of water of foreign salinity, or whether they would simply continue changing osmotically until they reached equilibrium with the environment.

The first step in our experiment was to simply observe the worms and get a "feel" for the ways in which they act. We did this on Wednesday, May 7, 1997 from 9:30am to 10:30am. Also on this day we learned how to mix and measure salinity, practiced weighing the worms, and decided our exact schedule as far as when we would come in, for how long, and so on.

From what I observed, the polychaete is a salt-water worm that has adapted to live in estuaries. We kept the control tank at 20 parts per thousand to 24 parts per thousand, and the worms seemed very content and healthy at that level. The worms on which we experimented ranged in size from approximately four inches to approximately six inches. They weighed from 1.8 grams to 4.6 grams at the beginning of the experiment. They have a pinkish, almost salmon color to them, and on two opposite sides, they have these crimson hairs lined up in a row, stretching the entire length of their bodies (the hairs are less than an eighth of an inch long). If we were to call the two lines of hair "east and west", then on the "north and south" sides, there were dark lines that also stretched the entire length of their bodies. These were their primary blood vessels, and though we tried to locate the pulse that is supposed to conspicuously travel up and down this vessel, we were not able to locate it, except once on one worm for less than 30 seconds. Also, I often was not able to tell the difference between the head and the tail.

Their actions were very basic. They seemed to like to stay still for the most part, hiding underneath the little bit of seaweed we put in the tank. We also put a glass tube at the bottom of the tank, thinking that they might try to crawl in there for safety, but we never saw them in there. Basically, they remained very still, except for certain instances in which they seemed to start flailing uncontrollably. They would start swimming around in circles or in figure eights or in some other odd pattern. It was actually quite hilarious to watch. I was not quite sure why they did that, but I guessed that they were looking for something. I later found out that that was true, that they were looking for some sort of protection (like the seaweed).

I made another very shocking and interesting discovery the first time I took a worm out to weigh it. I took it out with a net and put it on a paper towel, and as I was walking to the scale, this "thing" jumped out at me from inside the worm (I literally almost dropped the poor guy!). The only way I can really explain it is if you take a sock and turn it inside-out. The worm basically extended its body by "unfolding" this unknown thing from inside. After the initial scare, I later came to realize that this is called the "eversible proboscis" or something to that effect. I learned that the worm uses it to catch small fish when it is hiding in some seaweed. I also observed it later and found little teeth on the end of the proboscis. That basically sums up the activity that I noticed.

After observing the worms, I formulated the hypothesis that, when facing a change in salinity, the worms would adapt osmotically to the environment and their volumes would change, but they would not make any efforts to re-adapt back to their original volumes. The reasons I formulated this hypothesis were quite frankly less than scientifically stable. When I looked at the worms, I saw a very basic physiology, and I suppose I figured that a basic physiology like that would be less capable of engaging in re-adaptive processes. I know that this hypothesis was based on a whim, but that is honestly how I came to it. I really do not have an excessively scientific background, so I am not overly aware of all the factors that go into a process like this. So my hypothesis was based on a general conjecture. Also I had heard that some of these worms have a tendency to lacerate under low salinity conditions, so I figured that would not support a re-adaptive hypothesis.

We began the experiment on Thursday, May 8, 1997. We came in at 7:30am to mix the salts and set everything up. The control tank was at 24 parts per thousand. We decided to put three worms (named Goliath, Louie, and Pedro) in ten parts per thousand and three worms (named Boris, Jenny, and Dopey) in 32 parts per thousand. We started weighing at 8:10am. I picked them up with my bare hands (what a stud I am!), Jeremy dried them off with a paper towel and put them in the container on the scale, and Patty recorded the time and weight. We also made sure to dry off the container after every use to make sure that the excess water did not get calculated with the worm's weight. We weighed all the worms every half hour until approximately 10:45am, when Jeremy and I had to leave. Patty stayed and continued to weigh the worms, but only every hour rather than every half hour, because the rate of their changing had begun to slow down. She stayed and weighed the last worm at 1:45pm. Then she returned at 4:00pm to weigh them once more. By this time, of the ten parts per thousand worms, Goliath continued growing (he was a whopping 8.2 grams), Louie had leveled off at 3.4 grams, and Pedro was dead. All the 32 parts per thousand worms had basically leveled off.

I came in the next day (Friday, May 9, 1997) and started weighing them at 11:03am. Of the low salinity worms, Goliath popped and died, Pedro was still dead (obviously), and Louie decreased one-tenth of a gram. Of the high salinity worms, Jenny and Dopey remained the same as the day before and Boris decreased one-tenth of a gram.

I then came in on Monday, May 12, 1997, and weighed them at 10:35am. Over the weekend, the last remaining low salinity worm, Louie, looked as if he was dead too. He was all bloody, and the water in his bowl was murky, so I figured he was dead, but then I saw him moving. He was in bad shape but still alive. So I weighed him, and he had decreased 1.1 grams. Of the high salinity worms, Boris was dead, and Jenny and Dopey had continued to decrease in volume (Jenny: -0.3 and Dopey: -0.6). Then I put all the worms that were still alive back in the control tank. I then threw away the dead worms and rinsed out all the bowls. We were planning on repeating the experiment on Tuesday, May 13, 1997, but most of the worms were dead when we got there.

So what happened? The changes in volumes were caused by osmosis. Osmosis is the passing of water through a semi-permeable membrane in order to reach equilibrium. Equilibrium is something that is naturally strived for, and when a polychaete's body weight remains constant, equilibrium has been reached. When a worm is in a constant salinity, say 24 parts per thousand, the level of solute in the worm's body is the same as the level of solute of the water it occupies; there is equilibrium. When the worm is removed from that environment and placed in a different one, that equilibrium is no longer present, and by laws of nature, something must happen to re-equalize. That is where osmosis comes in. When we put the worms in the water with low salinity (ten parts per thousand), they increased in volume. This happened because there was more solute on the inside of the worm than in the water. Solute cannot escape the semi-permeable membrane, so the only option is for water to enter the worm to dilute it, to make the solute concentration less dense. When the concentration of solute is the same in the worm and in the water, no more water will enter, and equilibrium will have been reached. In this case, equilibrium was never reached. The salinity was so low that water kept entering the worms, and the worms got bigger and bigger, until they popped, because their epidermis could not expand any further.

The opposite is true of the worms placed in higher salinity. The concentration of solute in their bodies was less than that of the water, so they expelled water to make their own concentration denser. Again, this happens until equilibrium is reached, and in this experiment, it appeared for a moment as if that occurred, but the worms either died or continued decreasing in volume.
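To make the idea of osmotic equilibrium more concrete, here is a toy numerical sketch (it is not part of our lab procedure, and the rate constant and starting values are invented purely for illustration). It simply lets water move in proportion to the difference between the worm's internal concentration and the tank's salinity, so the modeled volume climbs toward the point where the two match:

#include <stdio.h>

/* Toy model: water crosses the body wall until the worm's internal
   concentration matches the tank water. All numbers are invented. */
int main(void)
{
    double volume = 3.0;          /* grams, roughly a starting worm weight        */
    double solute = 3.0 * 24.0;   /* "amount" of solute at 24 parts per thousand  */
    double water  = 10.0;         /* salinity of the new tank (ppt)               */
    double k      = 0.05;         /* made-up rate constant per time step          */
    int t;

    for (t = 0; t <= 20; t++)
    {
        double internal = solute / volume;   /* internal concentration (ppt) */
        printf("step %2d: volume %.2f g, internal %.1f ppt\n", t, volume, internal);
        volume += k * (internal - water);    /* water enters while the inside is saltier */
    }
    return 0;
}

In this toy run the volume keeps rising toward about 7.2 grams, the volume at which the internal concentration would finally equal ten parts per thousand, which mirrors how our low-salinity worms kept swelling.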

Looking at the data, Goliath met his demise in a very basic way. First of all, he was huge to begin with (4.6 grams), and he just continued increasing in volume until he exploded. Pedro continued increasing, and then right before he died, his weight decreased half a gram. I am not sure why that happened; it is possible that right before he died, he lost some fluid from a laceration. Louie really confused me. For almost four hours (and probably more) on Thursday, Louie remained constant at 3.4 grams. It looked like he had reached equilibrium, and then on the next day, he decreased one tenth of a gram, so maybe he was re-adapting. Then on Monday, he decreased 1.1 grams. So then I figured that he was definitely re-adapting. But I also realized that he was definitely lacerated and very bloody and the water was murky, and I came to the conclusion that he had lost a good amount of body fluid and blood.

As for the higher salinity worms, they basically acted as I expected them to act. Their volumes continued decreasing. Both Boris and Jenny did have one measurement in which their weights actually increased, and I honestly do not know how to explain that. They all looked at one point as if they had reached equilibrium (especially Dopey), but none of them did.

So according to the data we collected in this experiment, it looks as if Nereis succinea, when placed in an environment with a different salinity, goes through a process of osmosis to reach equilibrium, but does not engage in regulatory processes to return to its original volume.

I very much enjoyed this project, and I truly, honestly did learn a lot from it (and I'm not just saying that). If I were to do it again, I would not have made the change in salinity so great. It would have been interesting to see what would have taken place if the change in salinity were only, say, six parts per thousand higher and six parts per thousand lower. Maybe next time we'll do that.


This essay is only for research purposes. If used, be sure to cite it properly!
 

SixFive

bonswa
Forum Member
Mar 12, 2001
18,728
238
63
53
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 64.012799844884 ?
Word Count: 1651

"This site is hellacious and outstanding!!"

C++ Programming


NOTES ON C++ PROGRAMMING
Module 1: Pointers and Memory Management


TABLE OF CONTENTS
OVERVIEW
BASIC MEMORY MANAGEMENT
GROUP ASSIGNMENT
INITIALIZATION
CONSTANTS
INCREMENT AND DECREMENT OPERATORS
ELSE-IF
SWITCH
LOOPS
EXAMPLES OF LOOPS
BREAK, CONTINUE
RETURN
FUNCTION DEFINITION
VOID FUNCTIONS
FUNCTIONS RETURNING A VALUE
OVERVIEW

Algorithms:

A step-by-step sequence of instructions that describes how to perform a computation.

Answers the question "What method will you use to solve this computational problem?"

Flowcharts:

Provides a pictorial representation of the algorithm using standard flowchart symbols.

Structure Charts:

Provides a pictorial representation of the modules contained in the program.

Programming Style:

Standard form:

The function name starts in column 1 and is placed, with the required parentheses, on a line by itself.

The opening brace of the function body follows on the next line and is placed under the first letter of the function name.

The closing brace is placed by itself in column 1 as the last line of the function.

The final form of your programs should be consistent and should always serve as an aid to the reading and understanding of your programs.

Comments:

Explanatory remarks made within a program. Help clarify what the complete program is about, what a specific group of statements is meant to accomplish, or what one line is intended to do.

Top-Down Program Development:

1. Determine the desired output items that the program must produce.
2. Determine the input items.
3. Design the program as follows:
a. Select an algorithm for transforming the input items into the desired outputs.
b. Check the chosen algorithm, by hand, using specific input values.
4. Code the algorithm into C.
5. Test the program using selected test data.
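Purely as an illustration (this example is not part of the original notes), here is how those five steps might look for a tiny Fahrenheit-to-Celsius program:

#include <stdio.h>

/* Step 1: output is a temperature in Celsius.
   Step 2: input is a temperature in Fahrenheit.
   Step 3: algorithm is celsius = (fahrenheit - 32) * 5 / 9,
           checked by hand: 212 F -> 100 C.
   Step 4: the algorithm coded in C:                          */
int main(void)
{
    double fahrenheit, celsius;

    printf("Enter a temperature in Fahrenheit: ");
    scanf("%lf", &fahrenheit);

    celsius = (fahrenheit - 32.0) * 5.0 / 9.0;
    printf("That is %.1f degrees Celsius\n", celsius);

    /* Step 5: test with known values such as 32 -> 0 and 212 -> 100. */
    return 0;
}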
BASIC MEMORY MANAGEMENT

Space set aside for the variable:

Characters 1 byte (8 bits)
Pointers 4 bytes
Integers 2 bytes (16 bits) or 4 bytes (32 bits)
Short int or short 2 bytes
Unsigned int or unsigned 2 bytes
Long Integers 4 bytes
Floats 4 bytes (single precision, about 7 decimal places)
Doubles 8 bytes (double precision, about 15 decimal places)


Type Space

a) double *values; __________________ ________________________

b) long x[1000]; __________________ ________________________

c) char *s = "string"; __________________ ________________________

d) char s[] = "string"; __________________ ________________________

e) char *name [10]; __________________ ________________________

f) int y; __________________ ________________________
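One way to check the blanks above on a particular machine is to let the compiler report the sizes itself. This is only an illustrative sketch, not part of the original worksheet; the exact numbers it prints depend on the compiler and platform:

#include <stdio.h>

int main(void)
{
    double *values;
    long x[1000];
    char *s1 = "string";
    char s2[] = "string";
    char *name[10];
    int y;

    /* sizeof reports the space the compiler sets aside for each object;
       sizeof does not evaluate its operand, so the uninitialized variables are fine here */
    printf("double *values : %lu bytes\n", (unsigned long)sizeof(values));
    printf("long x[1000]   : %lu bytes\n", (unsigned long)sizeof(x));
    printf("char *s1       : %lu bytes\n", (unsigned long)sizeof(s1));
    printf("char s2[]      : %lu bytes\n", (unsigned long)sizeof(s2));
    printf("char *name[10] : %lu bytes\n", (unsigned long)sizeof(name));
    printf("int y          : %lu bytes\n", (unsigned long)sizeof(y));
    return 0;
}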

GROUP ASSIGNMENT

This assignment is to reinforce the idea of the big picture.
Assignment:
Your consulting firm has been hired to develop computer application(s) for a book store that will be opening in a local shopping center in 6 months. These applications will help the owner keep track of employee payroll, inventory, special orders, etc.
Your group should decide the following:
1. How many different applications do you need to write?
2. Can you use applications that have already been developed?
3. How are you going to divide up the project?

Turn in the following:
1. Structure charts for the applications you need to develop in-house.
2. List of inputs and outputs for each application.
3. List of variables and memory requirements for each application.

Be prepared to:
1. Describe your applications.
2. Explain why you selected these applications.
3. Defend your logic.
Scope

Scope of a variable is the part of the program where it can be used.

An "automatic" variable is declared at the beginning of a function or in the function?s argument list and its scope is limited to the function it is declared in. Two automatic variables of the same name but in different functions are unrelated.

An "external" variable is declared outside any function and its scope is from the point of declaration to the end of the file.



INITIALIZATION

External (and static) variables are initialized to zero by default.

Automatic variables
- contain undefined values unless they are initialized.
- lose their values when the call to the function they are declared in is over.
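For illustration only (the variable names are invented), the following fragment shows the difference in default initialization:

#include <stdio.h>

int total;                        /* external: initialized to 0 by default */

int main(void)
{
    int garbage;                  /* automatic: value is undefined here    */
    int counted = 5;              /* automatic: explicitly initialized     */

    printf("total   = %d\n", total);     /* always 0 */
    printf("counted = %d\n", counted);   /* always 5 */

    /* printing 'garbage' before assigning it would be undefined behavior,
       so give it a value first: */
    garbage = 0;
    printf("garbage = %d\n", garbage);
    return 0;
}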

CONSTANTS

integer constant 1 345 -10
character constant 'a' 't' (in single quotes)
real constant 2.3 3e10 .12E-5
string constant "abc" "a" (in double quotes)

Arithmetic Operators

* , /, %

+ , -

Relational Operators

<, <=, >, >=

== is equal to
!= is not equal to

Logical Operators

! NOT

&& AND

|| OR
INCREMENT AND DECREMENT OPERATORS

++, --

Assignment Operators

var op= expr

is equivalent to

var = var op expr

Conditional Expressions

expr1 ? expr2 : expr3

expr1 is first evaluated,

if expr1 is true, expr2 is evaluated

otherwise, expr3 is evaluated
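The following short example (not part of the original notes; the values are invented) ties together an assignment operator, the increment operator and a conditional expression:

#include <stdio.h>

int main(void)
{
    int score = 10;
    int bonus = 3;

    score += bonus;       /* same as: score = score + bonus  -> 13 */
    score++;              /* same as: score = score + 1      -> 14 */

    /* conditional expression: if score > 12 print "high", otherwise "low" */
    printf("score = %d (%s)\n", score, (score > 12) ? "high" : "low");
    return 0;
}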
TYPE CONVERSION

IMPLICIT TYPE CONVERSION

STEP 1
All `char' (and `short') variables are converted to `int'

STEP 2
1. If there are any operations with operands of different type

`lower' type is promoted to `higher' type

hierarchy: int < float < double

2. If it's an assignment statement, the result is converted to the type of the assigned variable

Explicit Type Conversion

The type can be explicitly converted by `type-casting':

(type)expression
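A brief illustrative sketch of both kinds of conversion (the values are invented):

#include <stdio.h>

int main(void)
{
    char c = 'A';
    int sum = c + 1;                  /* implicit: 'A' is promoted to int, sum is 66 */

    int total = 7, n = 2;
    double bad  = total / n;          /* integer division happens first: 3.0         */
    double good = (double)total / n;  /* explicit cast forces real division: 3.5     */

    printf("sum = %d, bad = %.1f, good = %.1f\n", sum, bad, good);
    return 0;
}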

Control Flow
if-else

if (expression)
statementA;
else
statementB;


if (expression)
{
statementA1;
statementA2;
}
else
{
statementB;
}


if (expressionA)
statementA1;
if (expressionB)
statementB1;
else
statementA2;
ELSE-IF

if (expression1)
statement1;
else if (expression2)
statement2;
.
.
.
else if (expressionN)
statementN;

else
default_statement;


SWITCH

switch (integer expression)
{
case const1: statement11;
statement12;
break;
case const2: statement2;
break;
.
.
.
default: default_statement;
break;
}

LOOPS
for
while
do-while
____________________________________________________

for (i = 0; i < MAX; i++)
process(a);
____________________________________________________

i = 0;
while (i < MAX)
{
process(a);
i++;
}
____________________________________________________

i = 0;
do
{
process(a);
.
.
.
i++;
} while (i < MAX);
EXAMPLES OF LOOPS
for (;;)
; (does nothing forever)

for (c = getchar(); c != '\n'; c = getchar())
process(c);
___________________________________
for (i=0, j=0; i < 10; i++, j++)
process(a[j]);
___________________________________
for (i = 0; i < 10;)
{
process(a);
i++;
}
___________________________________
found = 0;
while (!found)
{
.
.
if (condition)
found = 1;
}





do
{
do something;
.
.
printf("once more?");
c = getchar();
getchar();
} while (c == 'y' || c == 'Y');
BREAK, CONTINUE


These are one-word statements that alter the normal control flow within a loop.

break statement stops the current iteration and provides an exit from the innermost for, while or do-while loop. break also provides an exit from switch.

continue statement stops the current iteration and starts the next iteration of the loop. In while and do-while loops the test part is executed immediately; in for loops control passes to the increment step.
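A small illustrative loop (not from the notes) showing both statements at work:

#include <stdio.h>

int main(void)
{
    int i;

    for (i = 0; i < 10; i++)
    {
        if (i % 2 == 0)
            continue;          /* skip the rest of this iteration */
        if (i == 7)
            break;             /* leave the loop completely       */
        printf("%d ", i);      /* prints: 1 3 5                   */
    }
    printf("\n");
    return 0;
}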



RETURN

return statement exits the called function and transfers control to the calling function.

return expression; returns the value of the expression.
return (expression); the parentheses are optional.

return; without an expression, no value is returned, only the call is terminated.
Functions

- break large tasks into smaller units
- hide details of processing from other parts of the program that don't
need to know about them

This results in:
- clarification of the code
- less interference between variables
- easier-to-change code
- easier-to-debug code
- reusable code


FUNCTION DEFINITION:

type function-name(argument declarations)
{
. . .
body of the function
. . .
}


Function definitions can occur in any order in a program.

Function definitions are distinct; i.e., you cannot define a function within another function.
VOID FUNCTIONS

#include <stdio.h>

/* prototypes so the compiler knows these functions before main() uses them */
void print_error(void);
void print_diff(int v1, int v2);

int main(void)
{
    int a, b;
    . . .
    if (a == b)
        print_error();
    else
        print_diff(a, b);

    return 0;
}

void print_diff(int v1, int v2)
{
    int diff;

    diff = (v1 > v2) ? v1 - v2 : v2 - v1;
    printf("difference is %d", diff);
}

void print_error(void)
{
    printf("error in processing");
}
FUNCTIONS RETURNING A VALUE

#include <stdio.h>

int calc_diff(int v1, int v2);      /* prototype */

int main(void)
{
    int a, b;
    . . .
    printf("difference is %d", calc_diff(a, b));

    return 0;
}

int calc_diff(int v1, int v2)
{
    if (v1 > v2)
        return v1 - v2;
    else
        return v2 - v1;
}

This essay is only for research purposes. If used, be sure to cite it properly!
 

SixFive

bonswa
Forum Member
Mar 12, 2001
18,728
238
63
53
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 53.166388451028 ?
Word Count: 2748

"This site is hellacious and outstanding!!"

Computer


About two hundred years ago, the word "computer" started to appear in the dictionary. Back then, some people did not even know what a computer was. Today, however, most people not only know what a computer is but understand how to use one. Computers have therefore become more and more popular and important to our society. We can use computers everywhere, and they are very useful and helpful in our lives. The speed and accuracy of computers have made people feel confident in relying on them. As a result, a great deal of important information and data is saved on computers: your diary, the financial situation of an oil company, or secret intelligence from a military department. A lot of important information can be found in the memory of a computer. So people may ask a question: can we make sure that the information in the computer is safe and that nobody can steal it from the computer's memory?
Physical hazards are one cause of data destruction. For example, a flood of coffee spilled onto a personal computer could endanger its hard disk. Besides that, the human caretakers of a computer system can cause as much harm as any physical hazard. For example, a cashier in a bank can transfer money from a customer's account into his own. Nonetheless, the most dangerous thieves are not those who work with computers every day, but youthful amateurs who experiment at night: the hackers.
The term "hacker "may have originated at M.I.T. as students'
jargon for classmates who labored nights in the computer lab. In
the beginning, hackers are not so dangerous at all. They just
stole computer time from the university. However, in the early
1980s, hackers became a group of criminals who steal information
from other peoples' computer.
To keep out hackers and other criminals, people need to set up a good security system to protect the data in the computer. The most important thing is that we cannot allow those hackers and criminals to enter our computers. That means we need to design a lock for all our data, or use identification to verify the identity of anyone seeking access to our computers.
The most common method of locking up data is a password system. Passwords are a multi-user computer system's usual first line of defense against hackers. We can use a combination of alphabetic and numeric characters to form our own password. The longer the password, the more possibilities a hacker's password-guessing program must work through. However, a very long password is difficult to remember, so people tend to write it down, which immediately makes it a security risk. Furthermore, a high-speed password-guessing program can find a short or common password easily. Therefore, a password system alone is not enough to protect a computer's data and memory.
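To make the point about password length concrete, here is a rough, illustrative calculation (not part of the original essay). With 62 letters and digits to choose from, every extra character multiplies the number of possible passwords by 62; the guessing rate below is invented purely for the sake of the example.

#include <stdio.h>

int main(void)
{
    double choices = 62.0;              /* 26 lowercase + 26 uppercase + 10 digits   */
    double guesses_per_second = 1.0e6;  /* invented rate, for illustration only      */
    double combos = 1.0;
    int length;

    for (length = 1; length <= 12; length++)
    {
        combos *= choices;              /* one more character multiplies the total   */
        printf("length %2d: %.3g passwords, about %.3g seconds to try them all\n",
               length, combos, combos / guesses_per_second);
    }
    return 0;
}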
Besides a password system, a computer company may also consider the physical security of its information centre. In the past, people used locks and keys to limit access to secure areas. However, keys can be stolen or copied easily, so card-keys were designed to prevent that. Three types of card-key are commonly used by banks, computer centers and government departments. Each of these card-keys can carry an identifying number or password that is encoded in the card itself, and all are produced by techniques beyond the reach of the average computer criminal. One of the three is called the watermark magnetic card. It was inspired by the watermarks on paper currency. The card's magnetic strip holds a 12-digit number code that cannot be copied, and it can store about two thousand bits. The other two cards can store thousands of times as much data in their strips. They are optical memory cards (OMCs) and smart cards. Both are widely used in computer security systems.
However, passwords and card-keys alone are not enough to protect the memory in the computer. A computer system also needs a program that restricts access by verifying the identity of its users. Generally, identity can be established by something a person knows, such as a password, or something a person has, such as a card-key. However, people often forget their passwords or lose their keys, so a third method must be used: something a person is, a physical trait of the human being.
We can use a new technology called biometric devices to identify the person who wants to use a computer. Biometric devices are instruments that perform mathematical analyses of biological characteristics. For example, voices, fingerprints and the geometry of the hand can be used for identification. Nowadays, many computer centers, bank vaults, military installations and other sensitive areas have considered using biometric security systems, because the rate of mistaken acceptance of outsiders and of rejection of authorized insiders is extremely low.
The individuality of a vocal signature is the basis of one kind of biometric security system. The main point of this system is voice verification. The voice verifier described here is a developmental system at American Telephone and Telegraph. All a person needs to do is repeat a particular phrase several times. The computer samples, digitizes and stores what was said, then builds up a voice signature that makes allowances for an individual's characteristic variations. The theory of voice verification is very simple: it uses the characteristics of a voice, its acoustic strength. To isolate personal characteristics within these fluctuations, the computer breaks the sound into its component frequencies and analyzes how they are distributed. Someone who wants to steal information from your computer would need to have the same voice as you, which is practically impossible.
Besides using voices for identification, we can use fingerprints to verify a person's identity, because no two fingerprints are exactly alike. In a fingerprint verification system, the user places one finger on a glass plate; light flashes inside the machine, reflects off the fingerprint and is picked up by an optical scanner. The scanner transmits the information to the computer for analysis, and security experts can then verify the identity of that person from the results.
Finally, the last biometric security system is based on the geometry of the hand. In that system, the computer uses a sophisticated scanning device to record the measurements of each person's hand. With an overhead light shining down on the hand, a sensor underneath the plate scans the fingers through glass slots, recording light intensity from the fingertips to the webbing where the fingers join the palm. After passing the computer's check, people can use the computer or retrieve data from it.
Although many security systems have been invented, they are useless if people keep thinking that stealing information is not a serious crime. Therefore, people need to pay more attention to computer crime and fight against hackers, rather than relying only on computer security systems to protect the computer.
Why do we need to protect our computers? It is a question people rarely asked in the past, but today everyone knows how important and useful a computer security system is.
Computers have become more and more important and helpful. You can store a large amount of information or data on a small memory chip in a personal computer. The hard disk of a computer system is like a bank: it contains a lot of costly material, such as your diary, the financial situation of a trading company or secret military information. Just as a bank hires security guards, a computer security system can be used to prevent the outflow of information, whether from the national defense industry or from the personal diary on your computer.
Nevertheless, there is a price one might expect to pay for the tools of security: equipment ranging from locks on doors to computerized gate-keepers that stand watch against hackers, and special software that prevents employees from stealing data from the company's computer. The bill can range from hundreds of dollars to many millions, depending on the degree of assurance sought.
Although it costs a lot of money to create a computer security system, it is worth doing, because the data in a computer can easily be erased or destroyed by many kinds of hazards. For example, a power supply problem or a fire can destroy all the data in a computer company. In 1987, in a computer centre inside the Pentagon, the US military's sprawling headquarters near Washington, DC, a 300-watt light bulb was once left burning inside a vault where computer tapes were stored. After a time, the bulb had generated so much heat that the ceiling began to smolder. When the door was opened, air rushing into the room brought the fire to life. Before the flames could be extinguished, they had spread and consumed three computer systems worth a total of $6.3 million.
Besides those accidental hazards, humans are a great cause of data leaking out of computers. Two kinds of people can get into a security system and steal data from it. One is the trusted employees who are meant to be let into the computer system, such as programmers, operators or managers. The other kind is the youthful amateurs who experiment at night: the hackers.
Let's talk about those trusted workers first. They are the group who can most easily become criminals, directly or indirectly. They may steal the information in the system and sell it to someone else for a great profit. On the other hand, they may be bribed by someone who wants to steal the data, because it may cost a criminal far less in time and money to bribe a disloyal employee than to crack the security system.
Besides disloyal workers, hackers are also very dangerous. The term "hacker" originated at M.I.T. as students' jargon for classmates who worked in the computer lab at night. In the beginning, hackers were not dangerous at all; they just stole hints for tests at the university. In the early 1980s, however, hackers became a group of criminals who steal information from commercial companies and government departments.
What can we use to protect the computer? We have talked about the reasons for using a computer security system, but what kinds of tools can we use? The most common one is a password system. Passwords are a multi-user computer system's usual first line of defense against intrusion. A password may be any combination of alphabetic and numeric characters, up to a maximum length set by the particular system; most systems can accommodate passwords up to 40 characters. However, a long password can easily be forgotten, so people may write it down, which immediately creates a security risk. Some people use their first name or another significant word. With a dictionary of 2,000 common names, for instance, an experienced hacker can crack such a password within ten minutes.
Besides the password system, card-keys are also commonly used. Each kind of card-key can carry an identifying number or password that is encoded in the card itself, and all are produced by techniques beyond the reach of the average computer criminal. Three types of card are usually used: the magnetic watermark card, the optical memory card and the smart card.
However, both of these tools can easily be learned or stolen by other people. Passwords are often forgotten by users, and card-keys can be copied or stolen. Therefore, we need a higher level of computer security. Biometric devices offer safer protection for the computer; they can reduce the probability of mistakenly accepting an outsider to an extremely low level. Biometric devices are instruments that perform mathematical analyses of biological characteristics. However, the time required to pass the system should not be too long, and it should not inconvenience the user; for example, a system should not require people to remove their shoes and socks for footprint verification.
The individuality of a vocal signature is the basis of one kind of biometric security system. Although such systems are still in the experimental stage, reliable voice verification would be useful for both on-site and remote user identification. The voice verifier described here was invented by a developmental group at American Telephone and Telegraph. Enrollment requires the user to repeat a particular phrase several times. The computer samples, digitizes and stores each reading of the phrase and then, from the data, builds a voice signature that makes allowances for an individual's characteristic variations.
Another biometric device measures the act of writing. The device includes a biometric pen and a sensor pad. The pen converts a signature into a set of three electrical signals through one pressure sensor and two acceleration sensors. The pressure sensor detects changes in the writer's downward pressure on the pen point, while the two acceleration sensors measure the vertical and horizontal movement of the pen.
The third device scans the pattern in the eye. It uses an infrared beam that scans the retina in a circular path. A detector in the eyepiece measures the intensity of the light as it is reflected from different points. Because blood vessels do not absorb and reflect the same quantities of infrared as the surrounding tissue, the eyepiece sensor records the vessels as an intricate dark pattern against a lighter background. The device samples light intensity at 320 points around the path of the scan, producing a digital profile of the vessel pattern. Enrollment can take as little as 30 seconds, and verification can be even faster. Users can therefore pass through the system quickly, while hackers are rejected accurately.
The last device we want to discuss maps the intricacies of a fingerprint. In this verification system, the user places one finger on a glass plate; light flashes inside the machine, reflects off the fingerprint and is picked up by an optical scanner. The scanner transmits the information to the computer for analysis.
Although scientists have invented many kinds of computer security systems, no combination of technologies promises unbreakable security. Experts in the field agree that someone with sufficient resources can crack almost any computer defense. Therefore, the most important thing is people's conduct. If everyone in this world behaved well, there would be no need for complicated security systems to protect the computer.


This essay is only for research purposes. If used, be sure to cite it properly!
 

SixFive

bonswa
Forum Member
Mar 12, 2001
18,728
238
63
53
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 57.246647574819 ?
Word Count: 1621

"This site is hellacious and outstanding!!"

Computer, Internet, Privacy


INTERNET REGULATION: POLICING CYBERSPACE

The Internet is a method of communication and a source
of information that is becoming more popular among those who
are interested in, and have the time to, surf the information
superhighway. The problem with this much information being
accessible to this many people is that some of it is deemed
inappropriate for minors. The government wants censorship,
but a segment of the population does not. Legislative
regulation of the Internet would be an appropriate function
of the government.
The Communications Decency Act is an amendment which
prevents the information superhighway from becoming a
computer "red light district." On June 14, 1995, by a vote
of 84-16, the United States Senate passed the amendment. It
is now being brought through the House of Representatives.1
The Internet is owned and operated by the government,
which gives it the obligation to restrict the materials
available through it. Though it appears to have sprung up
overnight, the inspiration of free-spirited hackers, it in
fact was born in Defense Department Cold War projects of the
1950s.2 The United States Government owns the Internet and
has the responsibility to determine who uses it and how it
is used.
The government must control what information is
accessible from its agencies.

This material is not lawfully available through
the mail or over the telephone, there is no valid
reason these perverts should be allowed unimpeded
on the Internet. Since our initiative, the
industry has commendably advanced some blocking
devices, but they are not a substitute for
well-reasoned law.4
Because the Internet has become one of the biggest sources
of information in this world, legislative safeguards are
imperative.
The government gives citizens the privilege of using
the Internet, but it has never given them the right to use
it.

They seem to rationalize that the framers of the
constitution planned & plotted at great length to
make certain that above all else, the profiteering
pornographer, the pervert and the pedophile must
be free to practice their pursuits in the presence
of children on a taxpayer created and subsidized
computer network.3
People like this are the ones in the wrong. Taxpayer's
dollars are being spent bringing obscene text and graphics
into the homes of people all over the world.
The government must take control to prevent
pornographers from using the Internet however they see fit
because they are breaking laws that have existed for years.
Cyberpunks, those most popularly associated with the
Internet, are members of a rebellious society who are
polluting these networks with information containing
pornography, racism, and other forms of explicit
information.

When they start rooting around for a crime, new
cybercops are entering a pretty unfriendly
environment. Cyberspace, especially the Internet,
is full of those who embrace a frontier culture
that is hostile to authority and fearful that any
intrusions of police or government will destroy
their self-regulating world.5
The self-regulating environment desired by the cyberpunks is
an opportunity to do whatever they want. The Communications
Decency Act is an attempt on part of the government to
control their "free attitude" displayed in homepages such as
"Sex, Adult Pictures, X-Rated Porn", "Hot Sleazy Pictures
(Cum again + again)" and "sex, sex, sex. heck, it's better
even better than real sex"6. "What we are doing is simply
making the same laws, held constitutional time and time
again by the courts with regard to obscenity and indecency
through the mail and telephones, applicable to the
Internet."7 To keep these kinds of pictures off home
computers, the government must control information on the
Internet, just as it controls obscenity through the mail or
on the phone.
Legislative regulations must be made to control
information on the Internet because the displaying or
distribution of obscene material is illegal.

The courts have generally held that obscenity is
illegal under all circumstances for all ages,
while "indecency" is generally allowable to
adults, but that laws protecting children from
this "lesser" form are acceptable. It's called
protecting those among us who are children from
the vagrancies of adults.8

The constitution of the United States has set regulations to
determine what is categorized as obscenity and what is not.

In Miller vs. California, 413 U.S. at 24-25, the
court announced its "Miller Test" and held, at 29,
that its three part test constituted "concrete
guidelines to isolate 'hard core' pornography from
expression protected by the First Amendment.9

By laws previously set by the government, obscene
pornography should not be accessible on the Internet.
The government must police the Internet because people
are breaking laws. "Right now, cyberspace is like a
neighborhood without a police department."10 Currently
anyone can put anything he wants on the Internet with no
penalties. "The Communications Decency Act gives law
enforcement new tools to prosecute those who would use a
computer to make the equivalent of obscene telephone calls,
to prosecute 'electronic stalkers' who terrorize their
victims, to clamp down on electronic distributors of obscene
materials, and to enhance the chances of prosecution of
those who would provide pornography to children via a
computer."
The government must regulate the flow of information on
the Internet because some of the commercial blocking devices
used to filter this information are insufficient.
"Cybercops especially worry that outlaws are now able to use
powerful cryptography to send and receive uncrackable secret
communications and are also aided by anonymous
re-mailers."11 By using features like these it is
impossible to use blocking devices to stop children from
accessing this information. Devices set up to detect
specified strings of characters will not filter those that
they cannot read.
The government has to stop obscene materials from being
transferred via the Internet because it violates laws
dealing with interstate commerce.

It is not a valid argument that "consenting
adults" should be allowed to use the computer BBS
and "Internet" systems to receive whatever they
want. If the materials are obscene, the law can
forbid the use of means and facilities of
interstate commerce and common carriers to ship or
disseminate the obscenity.12
When supplies and information are passed over state or
national boundaries, they are subject to the laws governing
interstate and intrastate commerce. When information is
passed between two computers, it is subject to the same
standards.
The government having the power to regulate the
information being put on the Internet is a proper extension
of its powers. With an information based system such as the
Internet there is bound to be material that is not
appropriate for minors to see. In passing of an amendment
like the Communications Decency Act, the government would be
given the power to regulate that material.

BIBLIOGRAPHY

Buerger, David. "Freedom of Speech Meets Internet Censors;
Cisco Snubs IBM." Network World. Dialog Magazine
Database, 040477. 31 Oct. 1994, 82.

Diamond, Edwin and Stephen Bates. "...And Then There Was
Usenet." American Heritage. Oct. 1995, 38.

Diamond, Edwin and Stephen Bates. "The Ancient History of
the Internet." American Heritage. Oct. 1995, 34-45.

Dyson, Esther. "Deluge of Opinions On The Information
Highway." Computerworld. Dialog Magazine Database,
035733. 28 Feb. 1994, 35.

Exon, James J. "Defending Decency on the Internet."
Lincoln Journal. 31 July 1995, 6.

Exon, James J. "Exon Decency Amendment Approved by Senate."
Jim Exon News. 14 June 1995.

Exon, James J., and Dan Coats. Letter to United States
Senators. 27 July 1995.

Gaffin, Adam. "Are Firms Liable For Employee Net Postings?"
Network World. Dialog Magazine Database, 042574. 20
Feb. 1995, 8.

Gibbs, Mark. "Congress 'Crazies' Want To Carve Up Telecom."
Network World. Dialog Magazine Database, 039436. 12
Sept. 1994, 37.

Horowitz, Mark. "Finding History On The Net." American
Heritage. Oct. 1995, 38.

Laberis, Bill. "The Price of Freedom." Computerworld.
Dialog Magazine Database, 036777. 25 Apr. 1994, 34.

Messmer, Ellen. "Fighting for Justice On The New Frontier."
Network World. Dialog Magazine Database, 028048. 11
Jan. 1993, S19."Policing Cyberspace." U.S. News & World
Report. 23 Jan. 1995, 55-60.

Messmer, Ellen. "Sen. Dole Backs New Internet Antiporn
Bill." Network World. Dialog Magazine Database,
044829. 12 June 1995, 12.

"Shifting Into The Fast Lane." U.S. News & World Report.
23 Jan. 1995, 52-53.

Taylor, Bruce A. "Memorandum of Opinion In Support Of The
Communications Decency Amendment." National Law Center
for Children & Families. 29 June 1995, 1-7.

Turner, Bob. The Internet Filter. N.p.: Turner
Investigations, Research and Communication, 1995.

"WebCrawler Search Results." Webcrawler. With the query
words magazines and sex. 13 Sept. 1995.


This essay is only for research purposes. If used, be sure to cite it properly!
 

THE KOD

Registered
Forum Member
Nov 16, 2001
42,497
260
83
Victory Lane

One of these days, jabberhead, bang boom right to the moon
 

SixFive

bonswa
Forum Member
Mar 12, 2001
18,728
238
63
53
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 54.177192982456 ?
Word Count: 2053

"This site is hellacious and outstanding!!"

Computers


A common misconception about computers is that they are smarter than
humans. Actually, the degree of a computer's intelligence depends on the speed of its ignorance. Today's complex computers are not really
intelligent at all. The intelligence is in the people who design them.
Therefore, in order to understand the intelligence of computers, one must
first look at the history of computers, the way computers handle
information, and, finally, the methods of programming the machines.

The predecessors of today's computers were nothing like the machines we use today. The first known computer was Charles Babbage's Analytical Engine, designed in 1834. (Constable 9) It was a remarkable device for its time. In fact, the Analytical Engine would have required so much power, and would have been so much more complex than the manufacturing methods of the time could support, that it could never be built.

No more than twenty years after Babbage's death, Herman Hollerith designed an electromechanical machine that used punched cards to tabulate the 1890 U.S. Census. His tabulating machine was so successful that he formed the company that would later become IBM to supply it. (Constable 11) The computers of those times worked with gears and mechanical computation.

Unlike today's chip computers, the first computers were non-programmable, electromechanical machines. No one would ever confuse the limited power of those early machines with the wonder of the human brain. An example was the ENIAC, or Electronic Numerical Integrator and Computer. It was a huge, room-sized machine, designed to calculate artillery firing tables for the military. (Constable 9) ENIAC was built with more than 19,000 vacuum tubes, nine times the number ever used prior to this. The internal memory of ENIAC was a paltry twenty decimal numbers of ten digits each. (Constable 12) (Today's average home computer can hold roughly 20,480 times this amount.)

Today, the chip-based computer easily packs the power of more than
10,000 ENIACs into a silicon chip the size of an infant's fingertip. (Reid
64) The chip itself was invented by Jack Kilby and Robert Noyce in 1958,
but their crude devices looked nothing like the sleek, paper-thin devices
common now. (Reid 66) The first integrated circuit had but four
transistors and was half an inch long and narrower than a toothpick. Chips
found in today's PCs, such as the Motorola 68040, cram more than 1.2
million transistors onto a chip half an inch square. (Poole 136)

The ENIAC was an extremely expensive, huge and complex machine,
while PCs now are shoebox-sized gadgets costing but a few thousand
dollars. Because of the incredible miniaturization that has taken place,
and because of the seemingly "magical" speed at which a computer
accomplishes its tasks, many people look at the computer as a replacement
for the human brain. Once again, though, the computer can only accomplish
its amazing feats by breaking down every task into its simplest possible
choices.

Of course, the computer must receive, process and store data in
order to be a useful tool. Data can be text, programs, sounds, video,
graphics, etc. Some devices for entering data are keyboards, mice,
scanners, pressure-sensitive tablets, or any instrument that tells the
computer something. The keyboard is the most popular input device for
entering text, commands, programs, and the like. (Tessler 157) Newer
computers which use a GUI (pronounced gooey), or Graphical User Interface,
utilize a mouse as the main device for entering commands. A mouse is a
small tool with at least one button on it, and a small tracking ball at
the bottom. When the mouse is slid across a surface, the ball tracks the
movement on the screen and sends the information to the computer. (Tessler
155) A pressure-sensitive tablet is mainly used by graphic artists to
easily draw with the computer. The artist uses a special pen to draw on
the large tablet, and the tablet sends the data to the computer.

Once the data is entered into the computer, it does no good until
the computer can process it. This is accomplished by the millions of
transistors compressed into the thumb-nail sized chip in the computer.
These transistors are not at all randomly placed; they form a sequence,
and together they make a circuit. A transistor alone can only turn on and
off. In the "on" state, it will permit electricity to flow; in the "off"
state, it will keep electricity from flowing. (Poole 136) However, when
all the microscopic transistors are interconnected, they have the ability
to control, manipulate, and move data according to the condition of other
data. A computer's chip is so ignorant, it must use a series of sixteen
transistors and two resistors just to add two and two. (Poole 141)
Nevertheless, this calculation can be made in just a microsecond, an
example of the incredible speed of the PC. The type of chip mainly used
now is known as a CISC, or Complex Instruction Set Chip. (Constable 98)
Newer workstation-variety computers use the RISC type of chip, which stands for Reduced Instruction Set Chip. While the "complex" type might sound better, the architecture of
the RISC chip permits it to work faster. The first generation of CISC chip
was called SSI, or Small Scale Integration. SSI chips have fewer than one
hundred components. (Reid 124) The period of the late 1960s is known as
the era of MSI, or Medium Scale Integration. MSI chips range from one
hundred to one thousand components each. (Reid 124) LSI, or Large Scale
Integration, was used primarily in the 1970s, each chip containing up to
ten thousand components. Chips used in the 1990s are known as VLSI, or
Very Large Scale Integration, with up to a million or more components per
chip. In the not-so-distant future, ULSI, or Ultra Large Scale
Integration, will be the final limit of the miniaturization of the chip.
The transistors will then be on the atomic level and the interconnections
will be one atom apart. (Reid 124) Because further miniaturization is not practical, "parallel" systems that split jobs among hundreds of processors will become common in the future.

Once data is entered and processed, it will be lost forever if it
is not stored. Computers can store information in a variety of ways. The
computer's permanent read-only memory, which it uses for basic tasks such
as system checks, is stored in ROM, or Read Only Memory. Programs, files,
and system software are stored on either a hard disk or floppy disk in
most systems.

The hard disk and floppy disk function similarly, but hard disks
can hold much more information. They work by magnetizing and demagnetizing
small areas on a plastic or metal platter. The "read" head then moves
along the tracks to read the binary information. When the program or file
being read is opened, it is loaded into RAM (Random Access Memory) where
it can be quickly accessed by the processor. RAM is in small chips called
SIMMs, or Single Inline Memory Modules. The speed of RAM is much faster
than a disk drive because there are no moving parts. The information is
represented by either a one or a zero, and this amount of information is
called a bit. (Constable 122) Four bits make a nybble, and two nybbles
make a byte. One byte can hold one character, such as 'A'. 1024 bytes make a kilobyte, 1024 kilobytes make a megabyte, 1024 megabytes make a gigabyte, and 1024 gigabytes make a terabyte. Most personal computers
have approximately eighty or so megabytes of hard drive space and either
two or four

megabytes of RAM on average. Most ROM on PCs is about 256 kilobytes.

Machine language is the way all computers handle instructions: the simple, one-or-zero, yes-or-no, true-or-false Boolean logic necessary for computers. (Reid 122) Boolean logic was invented by George Boole, a poor British mathematician born in 1815. His new type of logic was mostly ignored until the makers of computers, more than a century later, realized that his was the ideal system of logic for the computer's binary system. Machine code is the only programming "language" the computer understands. Unfortunately, its endless and seemingly random strings of ones and zeros are almost incomprehensible to humans.

Not long after the computers such as ENIAC came along, programmers
began to develop simple mnemonic "words" to stand in the place of the
crude machine code. The words still had to be changed into machine code to
be run, though. This simple advancement greatly helped the programmers
with their tasks. Even with these improvements, the process of programming
was still a mind-boggling task.

The so-called high-level languages are the type used for
programming in the 90s. Rarely is there ever a need today for programming
in machine code. The way a high-level language works is by converting the
English-based commands into machine code by way of an Assembler program.
(Constable 122) There are two types of Assembler programs: Compilers and
Interpreters. A compiler converts the entire program into machine code.
The interpreter is only capable of converting one line at a time.

The first compiler language was Fortran. Fortran became quite
popular after its release in 1957 and is still used for some purposes to
this day. Cobol is another high-level compiler language that has been used
widely in the business world from 1960 until now. A compiler must be
utilized before a program can be run. The compiler translates the program
into the ones and zeros of binary machine code. There are many compiler
languages used today, such as C and Pascal, named for the French genius
Blaise Pascal. These two languages are the most popular high-level
languages used for application development.

The interpreter languages are better suited for home computers
than business needs; they are less powerful, but much simpler to use. An
interpreter language is translated into machine code and sent to the
processor one line of code at a time. The first popular interpreter
language was BASIC, or Beginner's All-purpose Symbolic Instruction Code, written by John Kemeny and Thomas Kurtz at Dartmouth College. BASIC is still
a much-used language, and is included free with many PCs sold today. BASIC
was the first programming language to use the INPUT command, which allows
the user to input information into the program as it is running.
(Constable 29) Another newer and less popular interpreter language is
Hypertalk, a language that is very English-like and easy to understand.
It is included free with every Macintosh computer.

There are advantages and disadvantages to both the compiler and
the interpreter languages. The interpreter languages lack speed; however,
because they compile as they run, they are very easily ?debugged? or fixed
and changed. Before the programmer using a compiler language can try out
his program, he must wait for the compiler to translate his program into
machine code and then change it later. With an interpreter language, on
the other hand, the ease of modification comes with the price of slower
performance and limited capabilities.

The history of computers, the way computers handle information,
and the methods of programming all confirm that computers will never be as
intelligent as the people who will design them.



This essay is only for research purposes. If used, be sure to cite it properly!
 

SixFive

bonswa
Forum Member
Mar 12, 2001
18,728
238
63
53
BG, KY, USA
Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 45.649766908623 ?
Word Count: 3006

"This site is hellacious and outstanding!!"

Computers


Only once in a lifetime will a new invention come about to touch every
aspect of our lives. Such a device that changes the way we work, live,
and play is a special one, indeed. A machine that has done all
this and more now exists in nearly every business in the US and one out of
every two households (Hall, 156). This incredible invention is the
computer. The electronic computer has been around for over a
half-century, but its ancestors have been around for 2000 years. However,
only in the last 40 years has it changed the American society. From the
first wooden abacus to the latest high-speed microprocessor,
the computer has changed nearly every aspect of people's lives for the
better.
The very earliest ancestor of the modern-day computer is the abacus, which dates back almost 2000 years. It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wire according to "programming" rules that the user must memorize, all ordinary arithmetic operations can be performed (Soma, 14). The next innovation in computers took place in 1642, when Blaise Pascal invented the first "digital calculating machine." It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal's father, who was a tax collector (Soma, 32).
In the early 1800s, a mathematics professor named Charles Babbage
designed an automatic calculation machine. It was steam powered and could
store up to 1000 50-digit numbers. Built in to his machine were
operations that included everything a modern general-purpose computer
would need. It was programmed by--and stored data on--cards with holes
punched in them, appropriately called "punchcards". His inventions were
failures for the most part because of the lack of precision machining
techniques used at the time and the lack of demand for such a device
(Soma, 46).
After Babbage, people began to lose interest in computers.
However, between 1850 and 1900 there were great advances in mathematics
and physics that began to rekindle the interest (Osborne, 45). Many of
these new advances involved complex calculations and formulas that were
very time consuming for human calculation. The first major use for a
computer in the US was during the 1890 census. Two men, Herman Hollerith
and James Powers, developed a new punched-card system that could
automatically read information on cards without human intervention
(Gulliver, 82). Since the population of the US was increasing so fast,
the computer was an essential tool in tabulating the totals.
These advantages were noted by commercial industries and soon led
to the development of improved punch-card business-machine systems by
International Business Machines (IBM), Remington-Rand, Burroughs, and
other corporations. By modern standards the punched-card machines were
slow, typically processing from 50 to 250 cards per minute, with each card
holding up to 80 digits. At the time, however, punched cards were an
enormous step forward; they provided a means of input, output, and memory
storage on a massive scale. For more than 50 years following their first
use, punched-card machines did the bulk of the world's business computing
and a good portion of the computing work in science (Chposky, 73).
By the late 1930s punched-card machine techniques had become so
well established and reliable that Howard Hathaway Aiken, in collaboration
with engineers at IBM, undertook construction of a large automatic digital
computer based on standard IBM electromechanical parts. Aiken's machine,
called the Harvard Mark I, handled 23-digit numbers and could perform all
four arithmetic operations. Also, it had special built-in programs to
handle logarithms and trigonometric functions. The Mark I was controlled
from prepunched paper tape. Output was by card punch and electric
typewriter. It was slow, requiring 3 to 5 seconds for a multiplication,
but it was fully automatic and could complete long computations without
human intervention (Chposky, 103).
The outbreak of World War II produced a desperate need for
computing capability, especially for the military. New weapons systems
were produced which needed trajectory tables and other essential data.
In 1942, John P. Eckert, John W. Mauchly, and their associates at the
University of Pennsylvania decided to build a high-speed electronic
computer to do the job. This machine became known as ENIAC, for
"Electrical Numerical Integrator And Calculator". It could multiply two
numbers at the rate of 300 products per second, by finding the value of
each product from a multiplication table stored in its memory. ENIAC was
thus about 1,000 times faster than the previous generation of computers
(Dolotta, 47).
ENIAC used 18,000 standard vacuum tubes, occupied 1800 square feet
of floor space, and used about 180,000 watts of electricity. It used
punched-card input and output. The ENIAC was very difficult to program
because one had to essentially re-wire it to perform whatever task he
wanted the computer to do. It was, however, efficient in handling the
particular programs for which it had been designed. ENIAC
is generally accepted as the first successful high-speed electronic
digital computer and was used in many applications from 1946 to 1955
(Dolotta, 50).
Mathematician John von Neumann was very interested in the ENIAC.
In 1945 he undertook a theoretical study of computation that demonstrated
that a computer could have a very simple, fixed physical structure and yet
be able to execute any kind of computation effectively by means of properly
programmed control, without the need for any changes in hardware. Von Neumann came up with
incredible ideas for methods of building and organizing practical, fast
computers. These ideas, which came to be referred to as the
stored-program technique, became fundamental for future generations of
high-speed digital computers and were universally adopted (Hall, 73).
The first wave of modern programmed electronic computers to take
advantage of these improvements appeared in 1947. This group included
computers using random access memory (RAM), which is a memory designed to
give almost constant access to any particular piece of information
(Hall, 75). These machines had punched-card or punched-tape input and
output devices and RAMs of 1000-word capacity. Physically, they were much
more compact than ENIAC: some were about the size of a grand piano and
required 2500 small electron tubes. This was quite an improvement over
the earlier machines. The first-generation stored-program computers
required considerable maintenance, usually attained 70% to 80% reliable
operation, and were used for 8 to 12 years. Typically, they were
programmed directly in machine language, although by the mid-1950s
progress had been made in several aspects of advanced programming. This
group of machines included EDVAC and UNIVAC, the first commercially
available computers (Hazewindus, 102).
The UNIVAC was developed by John W. Mauchly and John Eckert, Jr.
in the 1950s. Together they had formed the Eckert-Mauchly Computer
Corporation, America's first computer company, in the 1940s. During the
development of the UNIVAC, they began to run short on funds and sold their
company to the larger Remington-Rand Corporation. Eventually they built a
working UNIVAC computer. It was delivered to the US Census Bureau in 1951
where it was used to help tabulate the US population (Hazewindus, 124).
Early in the 1950s two important engineering discoveries changed
the electronic computer field. The first computers were made with vacuum
tubes, but by the late 1950s computers were being made out of
transistors, which were smaller, less expensive, more reliable, and more
efficient (Shallis, 40). In 1959, Robert Noyce, a physicist at the
Fairchild Semiconductor Corporation, invented the integrated circuit, a
tiny chip of silicon that contained an entire electronic circuit. Gone
was the bulky, unreliable, but fast machine; now computers began to become
more compact, more reliable and have more capacity (Shallis, 49).
These new technical discoveries rapidly found their way into new
models of digital computers. Memory storage capacities increased 800% in
commercially available machines by the early 1960s and speeds increased by
an equally large margin. These machines were very expensive to purchase
or to rent and were especially expensive to operate because of the cost of
hiring programmers to perform the complex operations the computers ran.
Such computers were typically found in large computer centers--operated by
industry, government, and private laboratories--staffed with many
programmers and support personnel (Rogers, 77). By 1956, 76 of IBM's
large computer mainframes were in use, compared with only 46 UNIVACs
(Chposky, 125).
In the 1960s efforts to design and develop the fastest possible
computers with the greatest capacity reached a turning point with the
completion of the LARC machine for Livermore Radiation Laboratories by the
Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a
core memory of 98,000 words and multiplied in 10 microseconds. Stretch was
provided with several ranks of memory having slower access for the ranks
of greater capacity, the fastest access time being less than 1
microsecond and the total capacity in the vicinity of 100 million words
(Chposky, 147).
During this time the major computer manufacturers began to offer a
range of computer capabilities, as well as various computer-related
equipment. These included input means such as consoles and card feeders;
output means such as page printers, cathode-ray-tube displays, and
graphing devices; and optional magnetic-tape and magnetic-disk file
storage. These found wide use in business for such applications as
accounting, payroll, inventory control, ordering supplies, and billing.
Central processing units (CPUs) for such purposes did not need to be very
fast arithmetically and were primarily used to access large amounts of
records on file. The greatest number of computer systems were delivered
for the larger applications, such as in hospitals for keeping track of
patient records, medications, and treatments given. They were also used in
automated library systems and in database systems such as the Chemical
Abstracts system, where computer records now on file cover nearly all
known chemical compounds (Rogers, 98).
The trend during the 1970s was, to some extent, away from
extremely powerful, centralized computational centers and toward a broader
range of applications for less-costly computer systems. Most
continuous-process manufacturing, such as petroleum refining and
electrical-power distribution systems, began using computers of relatively
modest capability for controlling and regulating their activities. In the
1960s the programming of applications problems was an obstacle to the
self-sufficiency of moderate-sized on-site computer installations, but
great advances in applications programming languages removed these
obstacles. Applications languages became available for controlling a
great range of manufacturing processes, for computer operation of machine
tools, and for many other tasks (Osborne, 146). In 1971 Marcian E. Hoff,
Jr., an engineer at the Intel Corporation, invented the microprocessor and
another stage in the development of the computer began (Shallis, 121).
A new revolution in computer hardware was now well under way,
involving miniaturization of computer-logic circuitry and of component
manufacture by what are called large-scale integration techniques. In the
1950s it was realized that "scaling down" the size of electronic digital
computer circuits and parts would increase speed and efficiency and
improve performance. However, at that time the manufacturing methods were
not good enough to accomplish such a task. About 1960 photo printing of
conductive circuit boards to eliminate wiring became highly developed.
Then it became possible to build resistors and capacitors into the
circuitry by photographic means (Rogers, 142). In the 1970s entire
assemblies, such as adders, shifting registers, and counters, became
available on tiny chips of silicon. In the 1980s very large scale
integration (VLSI), in which hundreds of thousands of transistors are
placed on a single chip, became increasingly common. Many companies, some
new to the computer field, introduced in the 1970s programmable
minicomputers supplied with software packages. The
size-reduction trend continued with the introduction of personal
computers, which are programmable machines small enough and inexpensive
enough to be purchased and used by individuals (Rogers, 153).
One of the first such machines was introduced in January 1975.
Popular Electronics magazine provided plans that would allow any
electronics wizard to build his own small, programmable computer for
about $380 (Rose, 32). The computer was called the "Altair 8800". Its
programming involved pushing buttons and flipping switches on the front of
the box. It didn't include a monitor or keyboard, and its applications
were very limited (Jacobs, 53). Even so, many orders came in for it,
and several famous owners of computer and software manufacturing companies
got their start in computing through the Altair.
For example, Steve Jobs and Steve Wozniak, founders of Apple Computer,
built a much cheaper, yet more productive version of the Altair and turned
their hobby into a business (Fluegelman, 16).
After the introduction of the Altair 8800, the personal computer
industry became a fierce battleground of competition. IBM had been the
computer industry standard for well over a half-century. They held their
position as the standard when they introduced their first personal
computer, the IBM Model 60 in 1975 (Chposky, 156). However, the newly
formed Apple Computer company was releasing its own personal computer, the
Apple II (the Apple I, the first computer designed by Jobs and Wozniak
in Wozniak's garage, was not produced on a wide scale). Software
was needed to run the computers as well. Microsoft developed a Disk
Operating System (MS-DOS) for the IBM computer while Apple developed its
own software system (Rose, 37). Because Microsoft had now set the
software standard for IBMs, every software manufacturer had to make their
software compatible with Microsoft?s. This would lead to huge profits for
Microsoft (Cringley, 163).
The main goal of the computer manufacturers was to make the
computer as affordable as possible while increasing speed, reliability,
and capacity. Nearly every computer manufacturer accomplished this and
computers popped up everywhere. Computers were in businesses keeping
track of inventories. Computers were in colleges aiding students in
research. Computers were in laboratories making complex calculations at
high speeds for scientists and physicists. The computer had made its mark
everywhere in society and built up a huge industry (Cringley, 174).
The future is promising for the computer industry and its technology. The
speed of processors is expected to double every year and a half in the
coming years. As manufacturing techniques are further perfected the
prices of computer systems are expected to fall steadily. However, since
microprocessor technology will keep advancing, its higher cost will
offset the drop in price of older processors. In other words, the price of
a new computer will stay about the same from year to year, but the
technology it delivers will steadily increase (Zachary, 42).
Since the end of World War II, the computer industry has grown
from a standing start into one of the biggest and most profitable
industries in the United States. It now comprises thousands of companies,
making everything from multi-million dollar high-speed super computers to
printout paper and floppy disks. It employs millions of people and
generates tens of billions of dollars in sales each year (Malone, 192).
Surely, the computer has impacted every aspect of people's lives. It has
affected the way people work and play. It has made everyone?s life easier
by doing difficult work for people. The computer truly is one of the most
incredible inventions in history.
Works Cited

Chposky, James. Blue Magic. New York: Facts on File Publishing. 1988.
Cringley, Robert X. Accidental Empires. Reading, MA: Addison Wesley
Publishing, 1992.
Dolotta, T.A. Data Processing: 1940-1985. New York: John Wiley & Sons,
1985.
Fluegelman, Andrew. "A New World", MacWorld. San Jose, CA: MacWorld
Publishing, February 1984 (Premiere Issue).
Gulliver, David. Silicon Valley and Beyond. Berkeley, CA: Berkeley Area
Government Press, 1981.
Hall, Peter. Silicon Landscapes. Boston: Allen & Irwin, 1985.
Hazewindus, Nico. The U.S. Microelectronics Industry. New York:
Pergamon Press, 1988.
Jacobs, Christopher W. ?The Altair 8800?, Popular Electronics. New
York: Popular Electronics Publishing, January 1975.
Malone, Michael S. The Big Scare: The U.S. Computer Industry. Garden
City, NY: Doubleday & Co., 1985.
Osborne, Adam. Hypergrowth. Berkeley, CA: Idthekkethan Publishing
Company, 1984.
Rogers, Everett M. Silicon Valley Fever. New York: Basic Books, Inc.
Publishing, 1984.
Rose, Frank. West of Eden. New York: Viking Publishing, 1989.
Shallis, Michael. The Silicon Idol. New York: Shocken Books, 1984.
Soma, John T. The History of the Computer. Toronto: Lexington Books,
1976.
Zachary, William. "The Future of Computing", Byte. Boston: Byte
Publishing, August 1994.



This essay is only for research purposes. If used, be sure to cite it properly!
 

SixFive

Name: Anonymous
Submitted: 08.29.01
Flesch-Kincaid Score: 46.748280212737
Word Count: 3131

"This site is hellacious and outstanding!!"

Designing a Network


I. STATEMENT & BACKGROUND

The College of Business (COB) server is now being used to support course delivery for the Computer Information Systems (CIS) department. The CIS professors will be using the server for various operations. Assignments, e-mail, and other types of information will be easier for students to access. Network users are able to share files, printers, and other resources; send electronic messages; and run programs on other computers. However, certain important issues need to be addressed and concentrated on. In order to begin the process of setting up the COB server, the total number of users (faculty and students) must be determined. Some other significant factors to be addressed are the required software applications needed on the network, an efficient and appropriate directory structure, and an effective security structure. In designing the directory structure, the major focus must be on accessibility. The number of undergraduate CIS courses that the server will be used for is between 15 and 17. For the users to be assured that their information is not at risk, we will create an effective security structure. In composing the appropriate security structure, certain access rights must be assigned to the users. An important technical detail in setting up a server is the amount of money that will need to be allocated for the restructuring of the system. For the system to function properly, the amount of hardware and software will need to be determined.

II. FUNCTIONAL REQUIREMENTS

The COB server will primarily be used by CIS professors and CIS students. The approximate number of professors in the CIS department is between five and seven, and the approximate number of CIS majors is between 100 and 120. As computer technology continues to grow, the number of CIS majors is increasing rapidly. If we see a considerable rise in Computer Information Systems majors, the department will have to expand its faculty. The CIS professors will be using the server to distribute their syllabi, hand out specific assignments, and send e-mail to their students. The layout, design, and complexity of each class will determine how much the professor may use the server.
The first class a CIS major usually takes at Western is CIS 251, Management Information Systems. This class offers students a foundation in management information systems in business organizations. In putting the COB server to use and getting students ready for hands-on work with computer-based information systems, CIS 251 focuses on analysis, development, design, implementation, and evaluation. Other topics covered in this class are computer applications in spreadsheets, word processors, and database systems. Information systems affect both business people and society at large.
The first programming class CIS majors take is CIS 256. The server will be very beneficial for this CIS course. Business Computer Programming (CIS 256) introduces the student to the application of programming principles in business. Detailed assignments involve flowcharting, coding, documentation, and testing. This course provides the student with a background in computer architecture and data representation. This class's account will require the BASIC programming language as well as its compiler.
The CIS elective, CIS 301, emphasizes maximum "hands-on" experience with microcomputers and software packages, including word processing, spreadsheets, database managers, and graphics systems. Microcomputer Applications (CIS 301) is an important course for students who are not majoring in Computer Information Systems but who would like to familiarize themselves with the personal computer. This account will contain Microsoft Office and e-mail capabilities.
An important class for which the server becomes useful is CIS 358. The professor can send applications, reports, programs, and other data to the server, where the student can transfer them to a disk or to a VAX account. Applications Development II (CIS 358) is a study of state-of-the-art tools and techniques for developing complex business applications: data organization, on-line processing, software engineering, and software maintenance. This CIS class is an extension of CIS 258. The student will expand his or her knowledge of the COBOL programming language. In order for the CIS major to apply principles of good application design and problem solving, the Visual Basic programming language will also be introduced. The account for these two classes will contain the COBOL programming language and its compiler, as well as Visual Basic.
For students to learn more about client-server technology, CIS 365 is required in the Computer Information Systems curriculum. The student will learn about different types of client-server environments, such as configuring a World Wide Web environment and building a NetWare LAN to support the delivery of client-server computing. Computer Architecture, Communications, and Operating Systems (CIS 365) focuses on the architecture of modern computer systems, including peripherals; data communications and networking with fault-tolerant computing; language translation; and operating systems software/hardware and utilities. This account will have Internet connections and NetWare operations.
In studying Database Management Systems (CIS 453), the CIS student will learn the role of databases, database applications, and data modeling using entity-relationship and semantic object models. The significance of the COB server for CIS 453 is that the student will focus on multi-user database processing on LANs, with an emphasis on client-server systems. In this database class, students will also be required to design and implement a database using current technology. This account will require Microsoft Access and Salsa.
To familiarize the CIS major with systems development, CIS 455 is required by the curriculum. This class introduces the student to cost/benefit justification; software design; implementation and maintenance procedures; quality assurance; and the integration of information systems into management decision-making processes. Computer Information Systems Analysis and Design (CIS 455) will require that a student design an appropriate computer system for a specific company or business. The account for this class will contain Microsoft Office and will have Internet connections.
The last class required in the CIS core is CIS 465. In this course, the focal point is the strategic use of information systems in the business environment. Information Resource Management (CIS 465) centers on the responsibility and accountability of information resource managers; security, legal, and ethical issues; procurement and supervision of resources; and resource assessment. This class will have Visual/IFPS Plus as well as Internet capabilities.
III. TECHNICAL DESIGN

Local area networks (LANs) can be thought of as pockets of coordinated computing within a small geographic area. The network has three layers of components: application software, network software, and network hardware. The application software will consist of computer programs that interface with network users and permit the sharing of information, such as files, graphics, and video, and of resources, such as printers and disks. The type of application software that will be used is called client-server: client computers send requests for information, or requests to use resources, to other computers, called servers, that control data and applications. The network software will consist of computer programs that establish protocols, or rules, for computers to talk to one another. These protocols are carried out by sending and receiving formatted instructions of data called packets. Protocols make logical connections between network applications, direct movement through the physical network, and minimize the possibility of collisions between packets sent at the same time. Network hardware is made up of the physical components that connect computers. Two important components that will carry the computer's signals are the wires or fiber-optic cables and the network adapter, which accesses the physical media that link the computers, receives packets from the network software, and transmits instructions and requests to other computers. Transmitted information is in the form of binary digits, or bits, which the electronic circuitry can process.
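As a rough sketch of the client-server request/response pattern described above (illustrative Python only; the port number and the "GET syllabus" message format are invented for this example and are not part of the proposed COB design):

# Minimal client-server sketch (illustrative only).
import socket
import threading

HOST, PORT = "127.0.0.1", 5050
ready = threading.Event()

def server():
    """The server controls the data and answers requests from clients."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()                                    # now accepting connections
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024).decode()         # request packet arrives
            if request == "GET syllabus CIS251":
                conn.sendall(b"CIS 251 syllabus: ...")  # response packet
            else:
                conn.sendall(b"ERROR: unknown request")

def client():
    """A client (e.g., a student workstation) asks the server for a file."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"GET syllabus CIS251")            # send the request packet
        print(cli.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
ready.wait()   # crude synchronization so the client does not connect too early
client()
t.join()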
The new local area network (LAN) that we are proposing to design will have only one volume on the server. The directory structure for this server will be as follows: there will be a system directory where the queue holds and services print jobs prior to printing. A login directory will be established to activate and open a session on the Network Operating System for a user. The DOS applications available to the public will be Word Perfect, Excel, Power Point, and Lotus 1-2-3. A mail directory will be created so users can send and retrieve e-mail. The users of this directory structure will be centered on the faculty, namely Heinrichs, Perry, Banerjee, Clapper, and Carland. The faculty will have rights to the classes that are taught here at Western Carolina University. These classes will also be used by the students of the Computer Information Systems program. The applications that will be used by the students and faculty of CIS will be Salsa, COBOL, Visual Basic, database applications, BASIC, and Visual/IFPS Plus. In these courses, faculty can assign programs or assignments to the students, and all a student has to do is go to the appropriate class directory and get the homework that is due for that class.
The medium used to transmit information will limit the speed of the network, the effective distance between computers, and the network topology. The coaxial cable will provide transmission speeds of a few thousand bits per second over long distances and about one hundred million bits per second (Mbps) over shorter distances.
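As a rough back-of-the-envelope illustration of what that figure means (the file size here is chosen arbitrarily for the example): a 1-megabyte file is about 8 million bits, so over a 100 Mbps link it transfers in roughly 8,000,000 / 100,000,000 ≈ 0.08 seconds, ignoring protocol overhead and collisions on the shared bus.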
The type of topology that will be used to arrange the computers in this network is the bus topology. The bus topology is composed of a single link connected to many computers. All computers on this common connection receive all signals transmitted by any attached computer. Local area networks that connect computers separated by short distances, such as in an office or on a university campus, commonly use a bus topology. Twisted pair, for slow-speed LANs, will be the cabling for these computers. Here, the main cable is typically a shielded twisted pair (like phone lines). The network board is attached to a tap via three cables, and the tap is connected to the twisted pair again at three points. An active hub will connect up to eight PCs and workstations. Each PC or workstation can be up to two thousand feet from the active hub. Each port of the active hub will be electrically isolated and will not need terminators for unused ports.
Typically, a LAN has a server node to provide certain services to the LAN users. In the case of this small-scale PC LAN, the server is attached to a laser printer so that all users can share that printer through the server. Another use of the server arises when LAN users need updated files: instead of copying them to every node, each node can copy or share them from the server, so the files need to be loaded or updated only once.
The network security structure will not be very complicated. The supervisor will be granted full access to all the resources in the CIS program. Students who are CIS majors will have read, copy, and write capabilities for the classes they attend. The public accounts will only have the right to access Word Perfect, Excel, Power Point, etc. The faculty will also have read, copy, write, and send rights to the classes.
Networks are subject to hacking, or illegal access, so shared files and resources must be protected. A network intruder could eavesdrop on packets being sent across the network or send fictitious messages. For important information, data encryption (scrambling data using mathematical equations) renders captured packets unreadable to an intruder. This server will use an authentication scheme to ensure that a request to read or write files or to use resources is from a legitimate client (faculty or CIS majors) and not from an intruder. The system will tell whether or not the user is a CIS major by giving each CIS major and faculty member a code, or password. The CIS majors will be given a code that they will have to enter every time they sit down at a computer and want information from a CIS class. Every time the student enters the code, the computer will keep it in memory, so if the same password is entered somewhere else, that person will not be allowed in. These station restrictions will keep students from going in and tampering with a student's information while that CIS student is working. There will be disk restrictions to ensure that storage space is evenly allocated. The CIS users will also have to change their passwords periodically to keep them confidential. Each account will have an expiration date so that the user must change his or her password as the semester goes on, to ensure the security of the account.
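A simplified sketch of the password check, account expiration, and single-station restriction described above (illustrative Python; the account name and the use of SHA-256 hashing are assumptions for the example, not the actual NetWare mechanism the proposal would rely on):

# Illustrative sketch of password authentication, account expiration,
# and a single-login "station restriction" (not the real NetWare code).
import hashlib

# Hypothetical account table: username -> (salted password hash, valid term)
accounts = {
    "cis_student1": (hashlib.sha256(b"salt|secret-pass").hexdigest(), "Fall"),
}
active_sessions = set()   # usernames currently logged in somewhere

def login(username: str, password: str, current_term: str) -> bool:
    record = accounts.get(username)
    if record is None:
        return False                       # unknown user
    stored_hash, valid_term = record
    if hashlib.sha256(f"salt|{password}".encode()).hexdigest() != stored_hash:
        return False                       # wrong password
    if current_term != valid_term:
        return False                       # account expired; password must be reset
    if username in active_sessions:
        return False                       # already logged in at another station
    active_sessions.add(username)
    return True

def logout(username: str) -> None:
    active_sessions.discard(username)

print(login("cis_student1", "secret-pass", "Fall"))   # True
print(login("cis_student1", "secret-pass", "Fall"))   # False: second station blocked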
Under no circumstances should an administrator put an entire system at risk for the convenience of a few users. Certain measures and precautions should be implemented to ensure that the network will operate effectively and efficiently.
Another major concern when designing a system is to anticipate the addition of more workstations and, eventually, more users. By considering this now, many problems can be solved before they even exist. If room is allotted for expansion in the beginning, then actually implementing new ideas and hardware should be simple. Assumptions about how large the system will actually get and how many users it will accommodate are very serious issues that need to be addressed in the utmost fashion. These questions require serious answers; if they are not dealt with, they could destroy a system.
Another key issue that needs to be addressed is who will be issued an account on the system. Certainly each CIS faculty member will have his or her own personal account. In these accounts, items such as personal research materials and grades will reside. Then there is the matter of the individual CIS classes and individual CIS students. Logically, each class will have a separate account because the information in each account will be different (applications, etc.). The main point of concern is the applications involved with each class: using Visual Basic and Visual/IFPS Plus, having a COBOL compiler to run programs on, and so on.
CIS students will have their own personal accounts. A space will be reserved for them to use e-mail and handle other personal matters. They will need to have a good understanding of the network to be able to change their directory to the class they need and do their work in it. Each faculty member will have their own account as well. They will be able to send e-mail to students and also put homework in the accounts of the classes that they teach. Other faculty members will not have access to the server. As stated before, the main purpose of the server is to deliver CIS information only, and for the CIS discipline only.
The main points of concern when dealing with the printer configuration are reliability and accessibility. Reliability is centered on quality and efficiency. Top-quality network printers are expensive but are sometimes not the best choice. Speed of output, such as pages per minute, plays a big role in choosing a network printer. Printers that are easy to get to and easy to service are a key to a successful network. I personally cannot stand to walk into a lab, have to hunt for where the printers are, and then have to wait for someone to remove a jammed sheet of paper. The lab on the second floor of the Belk building is a good example. An excellent example of a good configuration is in Forsyth. The printers there are easily seen and easily worked on. The printers separate the two main islands of workstations, which allows for efficient management.
This system will be of considerable size and area. It will require constant monitoring, and any on-line maintenance will be handled by a supervisor or network administrator. This designated person or persons will need to be very knowledgeable about all of the system's hardware and software. For example, CNA certification would be an excellent standard for consideration. The person or persons would have to be a full-time faculty member in the College of Business. I feel that having daily interaction with the system and the users would prove to be very helpful, in comparison to having someone called in to diagnose and solve problems. Outside consultants are usually expensive and most of the time are not worth it.
The load placed upon the system will vary at times. Classes are going to have conflicting assignment due dates, and everyone is going to rush to the lab to finish their assignments. However, I think that most of the time there will be only a slight to moderate load placed on the system. Most students just bounce in to check their mail or to send a quick message anyway. Sitting down and writing a program in one session is impossible anyway, so that will reduce the load in itself.
Login scripts for each user need to be simple. Allowing students to write their own should not even be considered. Each student should have the same format and be placed at the same starting point each time they log in. Allotting a specific number of search drives and network drives would definitely reduce problems. Students should be required to change their passwords periodically. The system login scripts could execute certain commands for each different type of user, faculty and students. These are just a few areas within the entire technical design process that require a serious answer.
Directory Structure

SYS:
  SYSTEM
  LOGIN
  PUBLIC -- Word Perfect, Excel, Power Point, Binder
  MAIL
  USERS  -- FACULTY, CIS STUDENTS
  APPS   -- COBOL, VB, Salsa, Database
  DATA   -- CIS 256, CIS 258, CIS 358, CIS 365, CIS 453, CIS 455, CIS 465

This essay is only for research purposes. If used, be sure to cite it properly!
 

THE KOD
Infrared Light
In order to understand night vision, it is important to understand something about light. The amount of energy in a light wave is related to its wavelength: Shorter wavelengths have higher energy. Of visible light, violet has the most energy, and red has the least. Just next to the visible light spectrum is the infrared spectrum.
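
To make the energy-wavelength relationship concrete, the standard photon-energy formula (implied but not stated in the article) is E = hc / wavelength. A near-infrared photon at 1 micron therefore carries E = (6.63 x 10^-34 J·s)(3.0 x 10^8 m/s) / (1.0 x 10^-6 m) ≈ 2.0 x 10^-19 joules, while a violet photon at about 0.4 microns carries roughly 5 x 10^-19 joules -- more than twice as much, which is why shorter wavelengths mean higher energy.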


Infrared light is a small part of the light spectrum.



Infrared light can be split into three categories:

Near-infrared (near-IR) - Closest to visible light, near-IR has wavelengths that range from 0.7 to 1.3 microns, or 700 billionths to 1,300 billionths of a meter.
Mid-infrared (mid-IR) - Mid-IR has wavelengths ranging from 1.3 to 3 microns. Both near-IR and mid-IR are used by a variety of electronic devices, including remote controls.
Thermal-infrared (thermal-IR) - Occupying the largest part of the infrared spectrum, thermal-IR has wavelengths ranging from 3 microns to over 30 microns.
The key difference between thermal-IR and the other two is that thermal-IR is emitted by an object instead of reflected off it. Infrared light is emitted by an object because of what is happening at the atomic level.

Atoms
Atoms are constantly in motion. They continuously vibrate, move and rotate. Even the atoms that make up the chairs that we sit in are moving around. Solids are actually in motion! Atoms can be in different states of excitation. In other words, they can have different energies. If we apply a lot of energy to an atom, it can leave what is called the ground-state energy level and move to an excited level. The level of excitation depends on the amount of energy applied to the atom via heat, light or electricity.

An atom consists of a nucleus (containing the protons and neutrons) and an electron cloud. Think of the electrons in this cloud as circling the nucleus in many different orbits. Although more modern views of the atom do not depict discrete orbits for the electrons, it can be useful to think of these orbits as the different energy levels of the atom. In other words, if we apply some heat to an atom, we might expect that some of the electrons in the lower energy orbitals would transition to higher energy orbitals, moving farther from the nucleus.


An atom has a nucleus and an electron cloud.



Once an electron moves to a higher-energy orbit, it eventually wants to return to the ground state. When it does, it releases its energy as a photon -- a particle of light. You see atoms releasing energy as photons all the time. For example, when the heating element in a toaster turns bright red, the red color is caused by atoms excited by heat, releasing red photons. An excited electron has more energy than a relaxed electron, and just as the electron absorbed some amount of energy to reach this excited level, it can release this energy to return to the ground state. This emitted energy is in the form of photons (light energy). The photon emitted has a very specific wavelength (color) that depends on the state of the electron's energy when the photon is released.

Anything that is alive uses energy, and so do many inanimate items such as engines and rockets. Energy consumption generates heat. In turn, heat causes the atoms in an object to fire off photons in the thermal-infrared spectrum. The hotter the object, the shorter the wavelength of the infrared photon it releases. An object that is very hot will even begin to emit photons in the visible spectrum, glowing red and then moving up through orange, yellow, blue and eventually white. Be sure to read How Light Bulbs Work, How Lasers Work and How Light Works for more detailed information on light and photon emission.
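
The relationship between temperature and peak wavelength can be made quantitative with Wien's displacement law (a standard blackbody result, not mentioned in the article itself): the peak wavelength in microns is roughly 2898 divided by the temperature in kelvins. A person at about 310 K therefore radiates most strongly near 2898 / 310 ≈ 9.3 microns, squarely in the thermal-IR band, while a 3000 K lamp filament peaks near 1 micron and already glows visibly.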

In night vision, thermal imaging takes advantage of this infrared emission. In the next section, we'll see just how it does this.
 

SixFive
I'm not going to bash anybody in here, so I thought I would just post some interesting reading. Hope all you truthers enjoy. There's some really great stuff!

moonbat.jpg
 

THE KOD
Words of Thanks


Because this was my first CD, I wanted to keep it short, clean, and simple. The downside of this choice, however, was that I could not really give an in-depth thanks to everyone who has helped me along on this journey. So this page is for them.


Jesus: For being such a wonderful God, and loving Father. I know that I would be lying to say that these songs are my own. Because they are not mine, they are Yours, they come from You, and are played for You, for without You I would be nothing.

My Family: You have supported me ever since I first began to tinker on the piano at 12. I don't know where I would be in life if you all did not love me with all you had. You have inspired me in my spiritual walk, and cared for me in the physical. Words could not express my gratitude to you.


Bobby "Big Daddy" Hill: I cannot even begin to express how much of a blessing you have been, as a contributor of this gift, and a grandfather. You have been there with me the whole time, supporting me with equipment and encouraging me. Thank you!


Will Ackerman: Wow, what an experience to go through at my age. But you have not only helped me take a great big step in this gift, you have also made it a very fun and "educational" experience. I don't think I could ever fully express how much of a blessing you have been to a young man beginning on a journey in life. And not only have you blessed this gift as a whole, but you have taken each individual song and really lifted it to a whole new level. It's funny: when I'm making up a new song, my parents will say, "All that's left now is to send it through a Will Ackerman 'wash'." You really know what you're doing when it comes to bringing out the best in someone's music, and on top of it all, your own music has been a blessing in and of itself. Thank you.

www.williamackerman.com


Corin Nelson: You and Will are crazy (in a funny way). And both of you made the trip up to that cold state called Vermont a very enjoyable one. Your engineering skills are quite amazing, and your patience with each song awesome. I know I would have gone crazy listening to them over and over and over again, but you made each one sound great. Cheers!


Carol (Me-Mom) Morris and Tory Porch: Thank you so much, both of you, for your help with the photography; you both have been such a help during this project. Thank you for supporting me, and for just being a great help in every way you can! And thank you, Me-Mom, for the beautiful photos that cover the CD; they are just breathtaking.

Keiko Guest and Tory Porch's Website:
www.keikoguestphotography.com



Stanton Lanier: Thank you so much, Stanton, for all the help you have been! It has been such a blessing and encouragement to have another Christian who has gone before me and is able to help me out, give me a heads-up, and the like. Thank you.

www.stantonlanier.com


And I would like to give one last thanks to all of my friends and "family" for being so supportive during this project. Thank you, Pattersons, for taking the time to listen and give feedback on those little things I fancy as "songs", and for encouraging me; you don't know how much it meant to me. Thank you, Jessica Engel, for building me up and being a constant source of encouragement all the way. And finally I would like to thank all of my other friends for just being my friends; that in and of itself is a blessing. Thank you, everyone!


And finally, since He is the first and the last, I would like to thank Jesus again, for just being plain amazing.

Photos by K. Ryan Brown and Carol Morris
 

THE KOD
I'm not going to bash anybody in here, so I thought I would just post some interesting reading. Hope all you truthers enjoy. There's some really great stuff!

moonbat.jpg

...............................................................

Yeah, I agree.

A lot of stuff that can be researched if you really get interested and have some extra time on your hands.
 