Monday, September 30, 2019

The History of Computer Technology

Only once in a lifetime will a new invention come about to touch every aspect of our lives. Such devices change the way we manage, work, and live. A machine that has done all this and more now exists in nearly every business in the United States. This incredible invention is the computer. The electronic computer has been around for over half a century, but its ancestors have been around for 2,000 years. However, only in the last 40 years has the computer changed American management to its greatest extent. From the first wooden abacus to the latest high-speed microprocessor, the computer has changed nearly every aspect of management, and our lives, for the better.

The earliest ancestor of the modern-day computer is the abacus, which dates back almost 2,000 years (Dolotta, 1985). It is simply a wooden rack holding parallel wires on which beads are strung. When these beads are moved along the wires according to rules that the user must memorize, all ordinary arithmetic operations can be performed. This was one of the first management tools used.

The next innovation in computers took place in 1642, when Blaise Pascal invented the first digital calculating machine. It could only add numbers, and they had to be entered by turning dials. It was designed to help Pascal's father, who was a tax collector, manage the town's taxes (Beer, 1966).

In the early 1800s, a mathematics professor named Charles Babbage designed an automatic calculating machine (Dolotta, 1985). It was steam powered and could store up to 1,000 50-digit numbers. Built into his machine were operations that included everything a modern general-purpose computer would need. It was programmed by, and stored data on, cards with holes punched in them, appropriately called punch cards. This machine was extremely useful to managers who dealt with large volumes of goods.
With Babbage†s machine, managers could more easily calculate the large numbers accumulated by inventories. The only problem was that there was only one of these machines built, thus making it difficult for all managers to use (Beer, After Babbage, people began to lose interest in computers. However, between 1850 and 1900 there were great advances in mathematics and physics that began to rekindle the interest. Many of these new advances involved complex calculations and formulas that were very time consuming for human calculation. The first major use for a computer in the U. S. was during the 1890 census. Two men, Herman Hollerith and James Powers, developed a new punched-card system that could automatically read information on cards without human (Dolotta, 1985). Since the population of the U. S. as increasing so fast, the computer was an essential tool for managers in tabulating the These advantages were noted by commercial industries and soon led to the development of improved punch-card business-machine systems by International Business Machines, Remington-Rand, Burroughs, and other corporations (Chposky, 1988). By modern standards the unched-card machines were slow, typically processing from 50 to 250 cards per minute, with each card holding up to 80 digits. At the time, however, punched cards were an enormous step forward; they provided a means of input, output, and memory storage on a massive scale. For more than 50 years following their first use, punched-card machines did the bulk of the world's business computing By the late 1930s punched-card machine techniques had become so well established and reliable that Howard Hathaway Aiken, in collaboration with engineers at IBM, undertook construction of a large automatic digital computer ased on standard IBM electromechanical parts (Chposky, 1988). Aiken's machine, called the Harvard Mark I, handled 23-digit numbers and could perform all four arithmetic operations (Dolotta, 1985). 
Also, it had special built-in programs to handle logarithms and trigonometric functions. The Mark I was controlled from prepunched paper tape. Output was by card punch and electric typewriter. It was slow, requiring 3 to 5 seconds for a multiplication, but it was fully automatic and could complete long computations without human intervention.

The outbreak of World War II produced a desperate need for computing capability, especially for the military (Dolotta, 1985). New weapons systems were produced which needed trajectory tables and other essential data. In 1942, John P. Eckert, John W. Mauchly, and their associates at the University of Pennsylvania decided to build a high-speed electronic computer to do the job. This machine became known as ENIAC, for Electronic Numerical Integrator And Computer (Chposky, 1988). It could multiply two numbers at the rate of 300 products per second, by finding the value of each product from a multiplication table stored in its memory. ENIAC was thus about 1,000 times faster than the previous generation of computers. It used 18,000 standard vacuum tubes, occupied 1,800 square feet of floor space, and consumed about 180,000 watts of electricity. It used punched-card input and output. The ENIAC was very difficult to program because one had to essentially rewire it to perform whatever task was wanted. It was efficient in handling the particular programs for which it had been designed. ENIAC is generally accepted as the first successful high-speed electronic digital computer and was used in many applications from 1946 to 1955. However, the ENIAC was not accessible to managers of businesses.

Mathematician John von Neumann was very interested in the ENIAC. In 1945 he undertook a theoretical study of computation that demonstrated that a computer could have a very simple, fixed structure and yet be able to execute any kind of computation effectively by means of properly programmed control, without the need for any changes in hardware.
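Von Neumann's stored-program idea — that instructions can be written into the same memory as data, so "reprogramming" means changing memory contents rather than rewiring hardware — can be illustrated with a toy interpreter. This is a minimal sketch with an invented three-instruction set, not a model of any real machine:

```python
# Toy stored-program machine: instructions and data share one memory.
# Instruction format (invented for illustration): (opcode, operand_address)
memory = [
    ("LOAD", 5),   # 0: load memory[5] into the accumulator
    ("ADD", 6),    # 1: add memory[6] to the accumulator
    ("STORE", 7),  # 2: write the accumulator back to memory[7]
    ("HALT", 0),   # 3: stop
    None,          # 4: unused
    40,            # 5: data
    2,             # 6: data
    0,             # 7: result goes here
]

def run(memory):
    acc, pc = 0, 0                     # accumulator and program counter
    while True:
        op, addr = memory[pc]          # fetch the next instruction from memory
        pc += 1
        if op == "LOAD":
            acc = memory[addr]
        elif op == "ADD":
            acc += memory[addr]
        elif op == "STORE":
            memory[addr] = acc
        elif op == "HALT":
            return memory

run(memory)
print(memory[7])  # 42
```

Because the program lives in ordinary memory cells, loading a different list of instruction tuples changes what the machine computes — the flexibility that, as the text notes, ENIAC's hard-wired design lacked.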
Von Neumann came up with incredible ideas for methods of building and organizing practical, fast computers. These ideas, which came to be referred to as the stored-program technique, became fundamental for future generations of high-speed digital computers and were universally adopted.

The first wave of modern programmed electronic computers to take advantage of these improvements appeared in 1947. This group included computers using random access memory (RAM), which is a memory designed to give almost constant access to any particular piece of information (Dolotta, 1985). These machines had punched-card or punched-tape input and output devices and RAMs of 1,000-word capacity. Physically, they were much more compact than ENIAC: some were about the size of a grand piano and required 2,500 small electron tubes. This was quite an improvement over the earlier machines. The first-generation stored-program computers required considerable maintenance, usually attained 70% to 80% reliable operation, and were used for 8 to 12 years (Hazewindus, 1988). Typically, they were programmed directly in machine language, although by the mid-1950s progress had been made in several aspects of advanced programming. This group of machines included EDVAC and UNIVAC, the first commercially available computers. With this invention, managers had even more power to perform calculations for such things as statistical demographic data (Beer, 1966). Before this time, it was very rare for a manager of a larger business to have the means to process such data.

The UNIVAC was developed by John W. Mauchly and John Eckert, Jr. in the 1950s. Together they had formed the Mauchly-Eckert Computer Corporation, America's first computer company, in the 1940s. During the development of the UNIVAC, they began to run short on funds and sold their company to the larger Remington-Rand Corporation. Eventually they built a working UNIVAC computer. It was delivered to the U.S.
Census Bureau in 1951, where it was used to help tabulate the U.S. population.

Early in the 1950s two important engineering discoveries changed the electronic computer field. The first computers were made with vacuum tubes, but by the late 1950s computers were being made out of transistors, which were smaller, less expensive, more reliable, and more efficient (Dolotta, 1985). In 1959, Robert Noyce, a physicist at the Fairchild Semiconductor Corporation, invented the integrated circuit, a tiny chip of silicon that contained an entire electronic circuit. Gone was the bulky, unreliable, but fast machine; now computers began to become more compact, more reliable, and to have more capacity.

These new technical discoveries rapidly found their way into new models of digital computers. Memory storage capacities increased 800% in commercially available machines by the early 1960s, and speeds increased by an equally large margin (Jacobs, 1975). These machines were very expensive to purchase or to rent and were especially expensive to operate because of the cost of hiring programmers to perform the complex operations the computers ran. Such computers were typically found in large computer centers operated by industry, government, and private laboratories staffed with many programmers and support personnel. By 1956, 76 of IBM's large computer mainframes were in use, compared with only 46 UNIVACs (Chposky, 1988).

In the 1960s efforts to design and develop the fastest possible computers with the greatest capacity reached a turning point with the completion of the LARC machine for Livermore Radiation Laboratories by the Sperry-Rand Corporation, and the Stretch computer by IBM. The LARC had a core memory of 98,000 words and multiplied in 10 microseconds.
Stretch was provided with several ranks of memory, with slower access for the ranks of greater capacity; the fastest access time was less than 1 microsecond, and the total capacity was in the vicinity of 100 million words.

During this time the major computer manufacturers began to offer a range of computer capabilities, as well as various computer-related equipment (Jacobs, 1975). These included input means such as consoles and card feeders; output means such as page printers, cathode-ray-tube displays, and graphing devices; and optional magnetic-tape and magnetic-disk file storage. These found wide use in management for such applications as accounting, payroll, inventory control, ordering supplies, and billing. Central processing units for such purposes did not need to be very fast arithmetically and were primarily used to access large amounts of records on file. The greatest number of computer systems were delivered for the larger applications, such as in hospitals for keeping track of patient records, medications, and treatments given. They were also used in automated library systems and in database systems such as the Chemical Abstracts system, where computer records now on file cover nearly all known chemical compounds.

The trend during the 1970s was, to some extent, away from extremely powerful, centralized computational centers and toward a broader range of applications for less-costly computer systems (Jacobs, 1975). Most continuous-process manufacturing, such as petroleum refining and electrical-power distribution systems, began using computers of relatively modest capability for controlling and regulating their activities.
In the 1960s the programming of applications problems was an obstacle to the self-sufficiency of moderate-sized on-site computer installations, but great advances in applications programming languages removed this obstacle. Applications languages became available for controlling a great range of manufacturing processes, for computer operation of machine tools, and for many other tasks. In 1971 Marcian E. Hoff, Jr., an engineer at the Intel Corporation, invented the microprocessor, and another stage in the development of the computer began (Chposky, 1988).

A new revolution in computer hardware was now well under way, involving miniaturization of computer-logic circuitry and of component manufacture by what is called large-scale integration (LSI). In the 1950s it was realized that scaling down the size of electronic digital computer circuits and parts would increase speed and efficiency and improve performance (Jacobs, 1975). However, at that time the manufacturing methods were not good enough to accomplish such a task. About 1960, photoprinting of conductive circuit boards to eliminate wiring became highly developed. Then it became possible to build resistors and capacitors into the circuitry by photographic means. In the 1970s entire assemblies, such as adders, shifting registers, and counters, became available on tiny chips of silicon. In the 1980s very-large-scale integration (VLSI), in which hundreds of thousands of transistors are placed on a single chip, became increasingly common.

Many companies, some new to the computer field, introduced in the 1970s programmable minicomputers supplied with software packages (Jacobs, 1975). The size-reduction trend continued with the introduction of personal computers, which are programmable machines small enough and inexpensive enough to be purchased and used by individuals (Beer, 1966). One of the first of such machines was introduced in January 1975.
Popular Electronics magazine provided plans that would allow any electronics wizard to build his own small, programmable computer for about $380. The computer was called the Altair 8800. Its programming involved pushing buttons and flipping switches on the front of the box. It didn't include a monitor or keyboard, and its applications were very limited. Even so, many orders came in for it, and several famous owners of computer and software manufacturing companies got their start in computing through the Altair (Jacobs, 1975). For example, Steve Jobs and Steve Wozniak, founders of Apple Computer, built a much cheaper, yet more productive, version of the Altair and turned their hobby into a business.

After the introduction of the Altair 8800, the personal computer industry became a fierce battleground of competition. IBM had long been the computer industry standard. They held their position as the standard when they introduced their first personal computer, the IBM PC, in 1981 (Chposky, 1988). By then, however, the Apple Computer company had released its own personal computer, the Apple II. (The Apple I, the first computer designed by Jobs and Wozniak in Wozniak's garage, was not produced on a wide scale.)

Software was needed to run the computers as well. Microsoft developed a Disk Operating System, MS-DOS, for the IBM computer, while Apple developed its own software (Chposky, 1988). Because Microsoft had now set the software standard for IBMs, every software manufacturer had to make their software compatible with Microsoft's. This would lead to huge profits for Microsoft.

The main goal of the computer manufacturers was to make the computer as affordable as possible while increasing speed, reliability, and capacity. Nearly every computer manufacturer accomplished this, and computers popped up everywhere. Computers were in businesses keeping track of even more inventories for managers.
Computers were in colleges aiding students in research. Computers were in laboratories making complex calculations at high speeds for scientists and physicists. The computer had made its mark everywhere in management and built up a huge industry.

The future is promising for the computer industry and its technology. The speed of processors is expected to double every year and a half in the coming years (Jacobs, 1975). As manufacturing techniques are further perfected, the prices of computer systems are expected to steadily fall. However, since microprocessor technology will keep improving, its higher costs will offset the drop in price of older processors. In other words, the price of a new computer will stay about the same from year to year, but the technology will steadily improve.

Since the end of World War II, the computer industry has grown from a standing start into one of the biggest and most profitable industries in the United States (Hazewindus, 1988). It now comprises thousands of companies, making everything from multi-million dollar high-speed supercomputers to printout paper and floppy disks. It employs millions of people and generates tens of billions of dollars in sales each year. Surely, the computer has impacted every aspect of people's lives (Jacobs, 1975). It has affected the way people work and play. It has made everyone's life easier by doing difficult work for people.

The History of Computer Technology

This report briefly explains the history of modern computers, starting from the year 1936 to the present day. There are many models of computers documented throughout the years, but the only computer models mentioned are the ones that I deemed to have had the greatest effect on computer technology then and now. This report will show how, in just forty years, computers have transformed from slow, room-sized machines to the small and fast computers of today. Computers are an important part of everyday life, but there was a time when computers did not exist.
Computers are one of the few inventions that do not have one specific inventor. Many inventors have contributed to the production and technology of computers. Some of the inventions have been different types of computers, while others were parts needed for the computer to function effectively. Many people have added their creations to the list required to make computers work, adding to the overall technology of computers today.

The term "computer" originally referred to people. It was a job title for those who did repetitive work with math problems. A computer is defined as a programmable machine that receives input, stores and automatically manipulates data, and provides output in a useful format.

The most significant date in the history of computers is the year 1936. This is the year the first "computer" was developed, by a German engineer named Konrad Zuse. He called it the Z1 Computer, and it was the first system to be fully programmable. The Z1 Computer had computing power, setting it apart from other electronic devices.

Programming early computers became somewhat of a hassle for inventors, and in 1953 Grace Hopper invented the first high-level computer language. Her invention helped simplify the binary code used by the computer so that its users could dictate the computer's actions. Hopper's invention was called Flowmatic and has evolved into modern-day technology. In the same year, International Business Machines (IBM) was introduced into the computing industry, forever altering the age of computers. Throughout computer history, this company has played a major role in the development of new systems and servers for public and private use. Inventors saw IBM as competition within computing history, which helped to spur faster and better development of computers. Their first computer technology contribution was the IBM 701 EDPM Computer.
During the three years of production, IBM sold 19 machines to research laboratories, aircraft companies, and the federal government.

One of the first computers physically built in America was the IAS computer. It was developed at the Institute for Advanced Study at Princeton under the direction of John von Neumann between 1946 and 1950 (History of Computer Technology, 2011). John von Neumann wrote "First Draft of a Report on the EDVAC," in which he outlined the architecture of a stored-program computer (Computer History Museum – Timeline of Computer History, 2006). Electronic storage of programming information and data eliminated the need for the more clumsy methods of programming. The IAS computer is an example of a stored-program computer. Many modern computers trace their ancestry to the IAS machine, and they are referred to as von Neumann (or Princeton) architecture machines.

The IAS computer embodied the concept of a stored-program computer. The main memory contained two main categories of information: instructions and data. The computer's ability to hold different sequences of instructions in memory made it very useful, allowing inventors to build computers that could complete different tasks at different times. Such a computer can be reconfigured (reprogrammed) at any time to perform a new or different task. The Hungarian-born von Neumann demonstrated prodigious expertise in hydrodynamics, ballistics, meteorology, game theory, and statistics, and his use of mechanical devices for computation contributed to the production of the modern-day computer (Computer History Museum – Timeline of Computer History, 2006).

In 1955, Bank of America, coupled with the Stanford Research Institute and General Electric, saw the creation of the first computers for use in banks. Researchers at the Stanford Research Institute invented "ERMA", the Electronic Recording Method of Accounting computer processing system.
ERMA updated and posted checking accounts and handled check processing and account management that had previously been done manually. MICR (Magnetic Ink Character Recognition) was a part of ERMA and allowed computers to read the special numbers at the bottom of checks. This technology helped with the tracking and accounting of check transactions. ERMA was officially demonstrated to the public in September 1955 and first tested on real banking accounts in the fall of 1956 (Blain, 2005). Today, computer technology has transformed the banking industry.

One of the most important breakthroughs in computer history occurred in 1958. This was the creation of the integrated circuit, known as the chip. The integrated circuit is one of the base requirements for modern computer systems. On every motherboard and card inside a computer system are many chips that govern what the boards and cards do. Without the integrated circuit, the computers known today would not be able to function.

The first commercial integrated circuits became available from the Fairchild Semiconductor Corporation in 1961. All computers then started to be made using chips instead of individual transistors and their accompanying parts. Texas Instruments first used the chips in Air Force computers and the Minuteman missile in 1962. They later used the chips to produce the first electronic portable calculators. The original integrated chip had only one transistor, three resistors, and one capacitor and was the size of an adult's pinkie finger. Today, an integrated chip is smaller than a penny and can hold 125 million transistors (Bellis).

The late 1970s saw the popularization of personal computers, and the progress continues from then until now. An explosion of personal computers occurred in the 1970s. The Tandy Corporation was one of the leading companies in computer technology. Their most popular invention was the TRS-80, arriving on the market in the late 1970s.
It was immediately popular, selling out at Radio Shack, where it was exclusively sold. The TRS-80 was sold for only $600, making it affordable for many individuals to own their own personal computer. Within its first year, over 55,000 consumers bought the Tandy TRS-80 to use in their home or office, and over 250,000 of them sold in the next few years. Tandy Corporation's TRS-80 had a keyboard and motherboard all in one. This is a common trend that other companies today follow for their personal computer products. The TRS-80 also included office applications, including a word processor, calculator, and early spreadsheet capabilities (The People History – Computers From the 1970s). People during the late 70s embraced personal computers and used them for a variety of reasons, such as games, office applications, home finances, storing data, and many other uses.

In 1976, Apple Computer was founded by Steve Jobs and Steve Wozniak. The Apple II was launched in 1977 and was an immediate success as well. Apple created the "home/personal computer" that could be used by anybody. The success of the Apple II established Apple Computer as a main competitor in the field of personal computers. Then Dan Bricklin created a spreadsheet program called VisiCalc for the Apple II. It went on sale in 1979, and within four years it sold 700,000 copies at $250 a time (Trueman, 2000). By 1980, there were one million personal computers in the world.

Computers have come an enormous way since their initial establishment: the earliest electronic computers were so large that they would take up the entire area of a room, while today some are so small that they can fit in your hands. While computers are now an important part of the everyday lives of human beings, there was a time when computers did not exist. Knowing the history of computers and how much progression has been made can help individuals understand just how complicated and innovative the creation of computers really is.
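The spreadsheet model VisiCalc popularized — a grid of cells where a cell holds either a value or a formula referencing other cells, with dependents recomputed when an input changes — can be sketched in a few lines. This is a toy illustration with invented cell names, not VisiCalc's actual syntax or engine:

```python
# Toy spreadsheet: a cell holds either a number or a formula (a function
# of the sheet). Reading a formula cell recomputes it from its current
# inputs, so edits to one cell propagate to the cells that depend on it.

def get(sheet, name):
    cell = sheet[name]
    return cell(sheet) if callable(cell) else cell

sheet = {
    "Q1": 120,                                       # sales, quarter 1
    "Q2": 80,                                        # sales, quarter 2
    "TOTAL": lambda s: get(s, "Q1") + get(s, "Q2"),  # =Q1+Q2
    "AVG":   lambda s: get(s, "TOTAL") / 2,          # =TOTAL/2
}

print(get(sheet, "TOTAL"))  # 200
sheet["Q2"] = 180           # change one input cell...
print(get(sheet, "TOTAL"))  # ...and dependent cells follow: 300
```

A real spreadsheet engine adds dependency tracking and caching so only affected cells recompute, but the core model — formulas stored alongside data and re-evaluated on demand — is the same.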
The first programmable digital computers, invented in the 1940s, were dramatically different in appearance and technology from today's. They were as big as living rooms and were about as powerful as modern-day calculators. Modern computers are billions of times more capable than early machines and occupy far less space. Simple computers are small enough to fit into mobile devices such as smartphones and can be powered by a small battery.

In today's world, computers play an incredibly large role in how the world operates, and the majority of tasks could not be completed without them. Although there are certainly some areas and jobs that cannot yet be completed solely by computers, and which thus still require actual manpower, for the most part computers have helped to make life significantly easier, more productive, and more convenient for us all. Future computer technology may help solve medical problems by reinterpreting sensory data and modulating brain activity. Technology may become so advanced that it allows people who have lost the use of their limbs to use robotics to regain movement. The future of computer technology is very bright and welcome indeed. Current trends, research, and development happening at lightning speed support this statement. Our children will see a whole new world of technology with computers within the next decade.

Works Cited
http://inventors.about.com/od/istartinventions/a/intergrated_circuit.htm
http://www.computerhistory.org/timeline/?category=cmptr
http://www.thepeoplehistory.com/70scomputers.html
http://www.historylearningsite.co.uk/personal_computer.htm

Sunday, September 29, 2019

Kpmg Analysis

An overview of what the company does, its history, and its product/service range

KPMG is a leading multinational professional services firm, dealing with both audit and tax, with over 10,000 partners and staff. They have achieved a vast number of awards for both employment and health and safety, and this in turn reflects their dedication to excellence in their services. In 2008, KPMG merged with other firms in Europe to form KPMG Europe LLP. This makes the company the largest integrated accountancy firm in Europe, with its headquarters in Frankfurt. KPMG has a wide range of human resources, and this results in a diverse and highly skilled workforce. Furthermore, it can be seen that KPMG treat their workforce as an intangible resource, and this contributes to the firm's competitive position.

KPMG deal with three key areas: audit; tax and pensions; and advisory. Their audit department deals with decision making within capital markets (KPMG, 2011, p. 1). Therefore, they provide a service to stakeholders by ensuring that they are able to independently audit organizations. Their tax and pensions function helps individual organizations to reduce their tax burden and to ensure they meet the highest levels of compliance; this involves key areas such as corporate reputation, pensions, and effective tax rates. Finally, they offer advisory support, which assists businesses through their business life cycle and helps and encourages firms to develop within regulatory environments.

An analysis of the firm's macro-environment

Table 1: PEST analysis

Political: Increased governmental regulation. Increased taxes reducing consumer and corporate spending. Focus on environmental governance, for example environmental auditing.

Economic: Difficult and restrictive economic times. Businesses closing down on the UK high street. Unstable economic times, which have resulted in an increased focus on the financial sector.
Social: Consumer demand for CSR. Social concerns over the stability of the economy – this results in firms such as KPMG coming under increasing scrutiny.

Technological: Integration of economies – the need for global expansion. Boundless economy – technology has facilitated 24-hour communication across borders. Advances in technology, which can be used to promote the detailed nature of KPMG's services.

The PEST analysis highlights a dynamic, ever-changing environment. In particular, it can be seen that the company must utilize strategic tools to understand and deal with many of the issues presented in the PEST analysis. At present, the main difficulties facing the firm are in the economic and political environment. The economic recession has resulted in scrutiny of the financial sector, and this in turn demands a need to offer an increasingly integral service. Furthermore, a secondary result has been increased regulation, which affects not only KPMG itself but the many services it offers to its clients.

An analysis of the company's microenvironment

Figure 1: Porter's Five Forces

Porter's Five Forces model is an excellent tool for understanding how powerful a company is in its particular business environment. It is very useful because it can identify the business's strength in the competitive market and the position that may occur in the future if the company changes its plans. As a result, the firm can take advantage of a situation in which it has power, avoid wrong steps in the future, and improve a situation in which it seems to have weaknesses.

• Competitive rivalry: As KPMG belongs to a market that can be defined as an oligopoly, the level of competition is not too high. This kind of market is controlled by the "Big Four" because they share a huge proportion of the market. Because of this, the firms have the power to charge high fees.

• Power of suppliers: The main purpose of KPMG is the provision of services.
As a result, the major asset of the business is its individual employees and members. For that reason the firm should seek to keep and extend its talent and try to hire more qualified accountants.

• Power of buyers: Customers are powerful in this kind of market. They can easily switch to a competing firm because the costs of doing so aren't too high. In addition, the services provided by the "Big Four" are similar and of the same standard, and this makes a customer's decision to move to a similar firm even easier.

• Threat of substitutes: The threat of possible substitutes for accounting services is very small because there are no obvious substitutes for those services in the market.

• Threat of new entrants: The market is dominated by the "Big Four", so the barriers to entering the market are very high. However, it is quite common for small client firms not to choose one of the big firms; as a result, there is some space left for new small companies to enter the market.

Table 2: SWOT

Strengths
1) Asset leverage.
2) High research and development focus.
3) Areas of online growth.
4) Strong management team, substantial focus on HR.
5) Strong brand equity.
6) Strong financial position, which allows the firm to internationalize.
7) Strong European presence.
8) Competitive pricing of services.

Weaknesses
1) Weak focus on real estate.
2) Vulnerability to litigation over gross negligence in audit practice.
3) Over-reliance on the European market – need to understand more developing markets such as China and India (Wilson and Purushothaman, 2003, p. 19).

Opportunities
1) Product and service expansion.
2) Entry into emerging markets.
3) Future acquisitions.
4) Increased expenditure on infrastructure could increase demand for advisory services.

Threats
1) Dynamic and competitive environment.
2) Increased regulation, resulting in a need for a thorough service.
3) Exchange rate fluctuations.
4) Changes in the economic environment.
5) Global economic slowdown

The SWOT analysis indicates that the firm has strengths which aid its position in a competitive market. Furthermore, it can be noted that the firm uses these strengths to position itself in the marketplace; in turn this reflects the resource-based view of strategy, which centres on the notion of 'core competencies' (Barney, 1991 p. 99). The threats outlined can be responded to by reviewing the macro environment and implementing strategic tools that may help to overcome any weaknesses. Finally, the opportunities outlined suggest that the firm should internationalize outside of Europe; this would extend the firm's client base and allow it to tap into developing markets such as China and India. This is in line with the BRICs study (Wilson and Purushothaman, 2003 p. 19), which indicates that by 2050 China will be the world's largest economy. Thus, an appreciation of the Eastern world is needed by KPMG to ensure success in the future.

Evidence of an audit of key competences within the company

The first key competency KPMG has is 'reputation'. This is an intangible asset, and one that sees KPMG respected for a high caliber of service. It is the result of professional and skilled staff, and a vast extent of knowledge that can be applied to a wide array of business situations. Reputation is needed when offering services that require thorough and exact processing; for example, firms trust KPMG to handle aspects such as tax and financial advisory, and thus reputation is often a key driver of success in this market. This is linked to the competency of professionalism, in which strong ethical values of integrity and honesty provide the foundation for the firm's work. Moreover, a key competency of the firm is its ability to develop a strong and skilled workforce. A focus on staff as an intangible resource is something which aids the firm's competitive advantage. For example, as Barney (1991 p.
99) notes, it is important that a firm has competencies which cannot be imitated by its competitors; this in turn allows the firm to gain a strong position in the market and reduce competition. Therefore, it can be seen that the firm has a key competency of transforming the HR system into one which supports overall organizational learning; this is seen as something which supports competitive positioning (Pucik, 1988 p. 1). Accountability is a competency which drives KPMG's success. First and foremost, the company is operating in a dynamic environment which demands transparency. Thus, the firm can be seen to take accountability for its actions, and this is supported by the firm's organizational culture. Organizational culture is defined by Schein (2010) as 'the shared norms and values which are deeply rooted within an organization'. KPMG has a positive culture which is upheld by values of customer service, customer satisfaction and the building of strong and meaningful relationships. Organizational culture can be seen as a competency, as its values can be translated into tangible resources such as increased clients and stronger external relationships. This is linked to KPMG's focus on making an impact: their clients expect the firm to make an impact and in turn build strong business relationships. Therefore, a strong organizational culture which supports such values supports the overall strategic direction of the firm. Needed in a dynamic environment is the ability to be flexible and to solve problems in an open and innovative manner. These are two competencies which KPMG can be seen to have; in particular, they highlight how the firm has a key aim of being able to analyze complex data and reach an appropriate solution in a manner which is simple for its clients to understand.
Thus, in summary, it can be seen that the firm has the ability to translate its key core competencies into strategic success. The most important competencies to the firm are those which are intangible in nature, as these cannot easily be imitated by the competition. In turn, such intangible resources often produce tangible results, as a link between the two can often be seen; for example, higher levels of customer service are likely to result in a larger client base.

A forecast of likely future prospects for the company's market and recommendations as to how it should react to potential changes

The ability of a firm to respond effectively to change is vital to the modern organization operating in a dynamic environment. KPMG has a strong focus on its human resources, and this has resulted in the development of a workforce which is committed to the strategic goals of the firm. Thus, as Hayes (2010 p. 12) notes, a flexible workforce is needed to remain competitive, and therefore the firm's reaction to any potential changes in the market is likely to be aided by its investment in its staff. KPMG's future market is threatened by increased regulation. For example, in 2007 the company was found guilty of criminal wrongdoing with regard to tax fraud (Department of Justice, 2007 p. 1). Such ethical wrongdoing damages company reputation, and this in turn is likely to affect the future of the firm. A firm such as KPMG gains a vast amount of business from reputation, and thus any damage to it may have a negative effect on future clients. Therefore, in order to respond to increased regulation, the firm must ensure the highest ethical conduct at all times, and high levels of transparency. In addition, KPMG's clients are faced with increased legislation regarding business reports, and this promotes a need for a thorough service from the firm.
Changing legislation will have an effect on the firm itself, and increased expenditure is likely to be needed to ensure that all workers have the skills necessary to provide an effective service. With regard to the external environment, developing trends are creating future changes for the company. Firstly, the company is operating in a dynamic environment and is therefore required to thrive, not simply survive. In order to respond to competition it is important that the firm looks to the future and implements a system of strategic planning. In turn, the firm should seek to provide accurate and insightful information to all of its clients, adapting the finance function to enable its clients to survive during turbulent economic times. Moreover, due to the economic climate, the needs of its consumers are changing. In order to respond to this trend, KPMG must simplify complex business issues in a manner which promotes a greater alignment of business processes. Many firms in a difficult economic environment adopt a short-term focus; this is something KPMG itself needs to steer away from, and something it must dissuade its clients from doing. Instead, a focus on sustainable business is needed, which in turn will deliver more than just reduced short-term costs. Thus, in summary, the economic climate has created a difficult environment for both KPMG and its clients, and in order to survive such times and prosper in the future, the firm must position itself in a manner which promotes success. The final trend discussed in this section is an increased focus on corporate social responsibility. This is required of the company itself, and KPMG's CSR actions may also influence the decisions of its clients.
At present, KPMG has a strong belief that social responsibility and business success go hand in hand, and thus promotes charitable donations, volunteering by its workforce and a key emphasis on the environment. In the future, a greater emphasis will be put on corporate social responsibility, and KPMG must respond to such changes by conducting environmental audits, promoting stakeholder theory, and demonstrating an overall dedication to the cause. Strategy can be used to conduct external analysis, and such analysis will enable a firm such as KPMG to respond to future changes in the market. For many firms, relative success or failure depends on the ability to strategically align themselves to the external environment (Henry, 2007), and as many markets, in particular the financial market, are as dynamic as ever, it is important that the firm is able to discern any trends which may later alter its strategy. As shown in this paper, the environment consists of both the macro and micro environment, and this contributes to the complexity of the market. It is often thought that the competitive environment has the most direct impact on the firm; however, it is the more external macro environment which creates the most problematic situations, in particular if a firm is unprepared for change. Dill (1962 p. 12) states that 'at the one level the environment is not a very mysterious concept, it means the surroundings of the organization, and the concept becomes challenging when we try to move from its simple description to an analysis of its properties'. Thus, it is recommended that KPMG undertake environmental analysis in order to give the company the opportunity to discern trends, and from these trends create strategies which enable the firm to best position itself.
By using internal strategic capabilities such as reputation, the firm may be able to diversify into other markets which are noted as being both less challenging and less competitive. Prediction of the future is difficult and always uncertain due to discontinuities. However, by scanning the environment the firm can detect weak signals: trends which 'may be largely insignificant due to the fact that their impact is yet to be felt; however, the careful monitoring of such can result in the firm being better strategically adept for such uncertainties' (Henry, 2007 p. 8; Van der Heijden, 1996). Van der Heijden (1996) notes that there are three different types of uncertainty which all play a part in the external environment: structural uncertainties, risks, and unknowables. Of these, structural uncertainties and unknowables are the two most difficult to comprehend, because they are events which either cannot be imagined or offer no evidence of their probability. Thus, the literature notes the tool of scenario planning (Schoemaker, 1995), which can be used to deal with even the most unimaginable of events (Porter, 1998). If KPMG were to adopt scenario planning, it would be more likely to gain a strong competitive position. Scenario planning is a tool which can be seen to 'stand out' due to its ability to 'capture a whole range of possibilities in great detail' (Schoemaker, 1995). Thus, scenario planning aims to overcome both the under- and over-prediction of change; it does so by adopting a middle ground which considers both unknowable and uncertain events.

Word count: 2546

References

Barney, JB (1991) 'Firm resources and sustained competitive advantage'. Journal of Management, 17 (1) pp. 99-120.

Department of Justice (2007) 'KPMG to pay $456 million for criminal violations in relation to largest ever tax shelter fraud case' [online].
Available from: http://www.justice.gov/opa/pr/2005/August/05_ag_433.html [Accessed 18.03.11].

Dill, W. (1962) 'The impact of environment on organizational development'. In Mailick, S. and Van Ness, E. (eds) Concepts and Issues in Administrative Behavior. Prentice-Hall, Englewood Cliffs, NJ.

Henry, AE (2007) 'Understanding strategic management'. Oxford University Press: Oxford.

KPMG (2011) 'What we do' [online]. Available from: http://www.kpmg.com/UK/en/WhatWeDo/Pages/default.aspx [Accessed 19.03.11].

Porter's Five Forces model: Industry analysis model [online]. Available from: http://www.learnmarketing.net/porters.htm [Accessed 21.03.11].

Porter, ME (1998) 'On competition'. Harvard University Press: Harvard, Boston.

Pucik, V (1988) 'Strategic alliances, organizational learning, and competitive advantage: the HRM agenda'. Human Resource Management, 27 (1) pp. -16.

Schein, EH (2010) 'Culture and leadership'. John Wiley and Sons: London.

Schoemaker, PJH (1995) 'Scenario planning: a tool for strategic thinking'. Sloan Management Review, 36 (2) pp. 25-32.

Van der Heijden, K. (1996) Scenarios: The Art of Strategic Conversation. Wiley, New York, NY.

Wilson and Purushothaman (2003) 'Dreaming with BRICs: the path to 2050'. Global Economics Paper 99 [online]. Available from: http://antonioguilherme.web.br.com/artigos/Brics.pdf [Accessed 20.03.11].

Saturday, September 28, 2019

Having a Child Does Reduce Marriage Satisfaction Essay

Children should be a source of happiness to a family, but that is not necessarily the case. The addition (or even removal) of a person from a family may require considerable reorganization in order for the family to maintain its normal system (LeMasters, 1957, cited in Twenge, Campbell and Foster, 2003). The inclusion of a new person into the family is usually a kind of crisis, since it must be supported by a reorganization of the family that strives to restore normalcy while accommodating the new person. LeMasters (1957) likened the reorganization process to a crisis since it must involve making concrete decisions to solve problems in old patterns of the family, which become somewhat insufficient with the arrival of a new person, especially a newborn. Insufficiency in a family due to the arrival of a newborn arises from several factors, which may be directly linked to the infant or may indirectly affect the parents. Moreover, babies at different ages have different requirements, and thus affect family systems in different ways. Twenge, Campbell and Foster (2003) noted that parents with children under the age of five experience persistent lack of sleep due to the infants' need for close attention, particularly at night. In addition, such parents may also experience chronic tiredness, some form of guilt that they are not offering the best care (particularly if the infant keeps crying), and a feeling of too much confinement at home while caring for the baby. At the individual level, mothers may be concerned about their appearance, both in terms of the stress involved in taking care of the baby and the physical attributes of the body after birth. According to Foley, Kope and Sugrue (2001), first-time mothers are particularly prone to this kind of stress. For the fathers, research reported by Gottman (1994) revealed that becoming a father was partly the cause of declines in the wife's sexual responsiveness and, ultimately, dissatisfaction in marriage.
Moreover, fathers usually become burdened with the role of sole breadwinner for the family, since the women (even those who are working) are often reduced to the role of housewives as they take care of babies in the early stages of growth. In general, when a married couple has a baby, the couple may be affected in a number of ways. To begin with, there may be an increase in household chores and stress, since the baby has its own requirements in addition to the routine duties (Twenge, Campbell and Foster, 2003). This may be amplified by a lack of adequate time for discussion between the couple, as much of the time is directed to the baby. Secondly, the lack of discussion results in poor companionship between the couple. Thirdly, as the baby becomes the center of focus and a gap opens between the couple, the couple's sexual life may suffer (Twenge, Campbell and Foster, 2003). In addition, as a married couple grows distant due to the arrival of a baby, they may seek solace in their daily activities, but this is likely to bring a number of disadvantages to the family, since there may be an overload in the accumulated roles of each parent (partner). McCary (1975) and Morgan (1988) have shown that when wives are not working, the arrival of a baby exacerbates the wife's dependency on the man, so the man feels more superior at the expense of the demoralized wife. Hence, birth raises inequity between married partners. Finally, having a child generates negative assessments of marriage, especially among non-traditional women who may view giving birth and taking care of a baby as too tedious and involved a task (Twenge, Campbell and Foster, 2003). In spite of the many challenges faced by families in having children, some authors (such as Foley, Kope and Sugrue, 2001) have noted that having a child may decrease marriage satisfaction, increase it, or have no effect at all.
Hence, all the aforementioned effects of having a child cannot be generalized to all families, since different families have different levels of socialization and economic standing, among other factors. It is thus worth noting that having a child confers various effects on the family setting. This paper will focus on the effect of having a child on marriage, but will lean towards the proposition that having a child or children does reduce satisfaction in marriage. The paper will review past work on the concept, accompanied by a concise discussion based on the findings. In order to reach a deduction on the topic, conclusions will be derived from the discussion to determine whether the proposition indeed holds water.

Friday, September 27, 2019

Strategic Management Essay

This resulted in deforestation. In the autocratic or authoritative management style, senior managers take all the important decisions without considering the involvement of workers. Senior managers do not trust their workers; they simply give them orders. The disadvantage of the autocratic management style is that communication is only one-way, which creates a "them and us" attitude between managers and workers. In the FC, the organizational structure was hierarchical too, so there was a wide gap between the top and the bottom of the chain of command. Due to this kind of command and control system, workers did only what they were told to do, out of fear. Centralization is a system in which decision making is concentrated in a few hands only. All important decisions are subject to the approval of top-level management, and other levels implement these decisions as per the directions of top-level managers. On the other hand, decentralization means the systematic delegation of authority to all levels of management and to all departments of an organization. In 1995, David Bills was appointed as the Director General of the FC. One notable point about him is that he was an outsider from Australia. Environmental concern is one of the big issues facing the FC. A few groups raised environmental issues against the FC; they accused it of a lack of awareness of various environmental and animal rights issues. This became very crucial to the FC's economic survival. Nowadays, the term 'corporate social responsibility' is much closer to all organizations. Corporate social responsibility refers to the way companies integrate environmental, social and economic concerns into their values and operations in an accountable and transparent manner. It is related to the long-term growth and success of the organization, and it plays an important role in contributing to the sustainable growth of communities.
It became the responsibility of any organization to foster and promote corporate social responsibility. Another problem facing David Bills was changing the FC's culture. It is more difficult to change an existing culture than to create a new culture in a new organization. When an organizational culture is already in place, it is difficult for people to forget their old behavior, beliefs and assumptions and to adopt a new behavioral pattern. In the business world, one thing is certain: change. When an organization experiences change, resistance among employees is common. Executive support and training are the most important elements in creating cultural change. When David Bills joined the organization, he found a very challenging task ahead of him: to boost the morale of the employees, who had very low morale and considered the organization a 'sinking ship'. For him, the most important task was to raise the morale of the employees and to employ them as profitably as he could in the organization. The main aim of the FC was to rebuild and maintain the timber reserves. But the organizational structure was highly influenced by the "hierarchical military systems of the time and the use of military language" (McCann 2004, p. 949). A hierarchical system in an organization allows for understanding the direct line of authority. There should be a line of authority.

Thursday, September 26, 2019

Kaolin Loess (Glacier & Periglacial Landscapes) Essay

A terrane is a crustal block or fragment that preserves a distinctive geologic history different from that of the surrounding areas, and that is usually bounded by faults. Accreted terranes are those that become attached to a continent as a result of tectonic processes. In more elaborate terms, a terrane is a large geographical feature, often a mountain range, that geomorphologists believe was once a group of islands sitting on a tectonic plate that was being subducted under a continental plate. When the part of the plate on which the islands rode began to be subducted, the islands jammed up the subduction zone and the plate behind them broke. As a result, the islands became attached to the side of the continent. As this happened again and again, the island arc became an inland mountain range. The Himalayas, according to the modern theory of plate tectonics, were formed as a result of a continental collision, or orogeny, along the convergent boundary between the Indo-Australian Plate and the Eurasian Plate; they are referred to as fold mountains. The collision began in the Upper Cretaceous period about 70 million years ago, when the north-moving Indo-Australian Plate, moving at about 15 cm per year, collided with the Eurasian Plate. About 50 million years ago, this fast-moving Indo-Australian Plate had completely closed the Tethys Ocean, the existence of which has been determined from sedimentary rocks settled on the ocean floor and the volcanoes that fringed its edges. Since these sediments were light, they crumpled into mountain ranges rather than sinking to the floor. The Indo-Australian Plate continues to be driven horizontally below the Tibetan Plateau, which forces the plateau to move upwards. The Indo-Australian Plate is still moving at 67 mm per year, and over the next 10 million years it will travel about 1,500 km into Asia.
About 20 mm per year of the India-Asia convergence is absorbed by thrusting along the Himalaya southern front. This leads to the

Servant Leadership in the Bible Dissertation

The Holy Quran also portrays the leader of the people as a servant who should work to satisfy the people rather than be a master commanding them. The religious connotation looks at leadership in this form as being part of the self-actualization factor noted in Maslow's hierarchy of needs (Joseph & Winston, 2005). Robert Greenleaf saw a servant leader as a person who acts as a servant first (Parris & Peachey, 2013). The individual does not begin acting as a leader if, deep within, the urge to serve is absent (Gonzaga, 2005). The main idea should be to create new platforms that make it easier to serve the people, and to make a conscious choice to lead based on the autonomy required for the growth of the individual's satisfaction (Gardner, Cogliser, Davis, & Dickens, 2011). This test is administered in harder situations, where the difficulty tests the leader's ability to come up with better means of dealing with failures as well as with the relationship between leaders and workers (Kernis & Goldman, 2006). Greenleaf argues that the deep-seated need and desire to serve others provides the core need for one to be a servant. Servant leaders have the natural feeling that emanates from this desire (Walumbwa, Avolio, Gardner, Wernsing, & Peterson, 2008). It can be created by making conscious aspirations and sticking to the core attributes that define the way this can happen, without losing track of the benefits derived from such an action. Most of these benefits are intrinsic (van Dierendonck, 2011). The paradoxical nature of servanthood and leadership is not to be missed. When Jesus was washing the disciples' feet, they were apprehensive of this act (Kool & van Dierendonck, 2012). They wanted to be the ones doing the washing, not Jesus.

Wednesday, September 25, 2019

THE INFLUENCE OF SOCIAL NETWORKING SITES (SNS) ON THE INTERPERSONAL RELATIONSHIPS OF STUDENTS Research Proposal

He attributes this phenomenon to the ability to communicate with people having a set of common interests using SNS technology. For instance, SNSs allow users to form groups based on a specific subject, allow private communication among selected people, and provide features to show or hide specific user information and messages based on a set of predefined rules. Such components allow users to establish and nurture virtual relationships regardless of geographical location. This virtual relationship among two or more individuals can be based on various factors, including past associative history (classmates, neighbours etc.), love, business, or any other form of social interaction. Traditionally, interpersonal relationships were limited to physical interaction in settings such as family, marriage, employment, social clubs etc., most of which come under the purview of legal frameworks, constraints and scrutiny. Social networking, however, is not restricted entirely within any of these boundaries, and even facilitates the establishment of relationships among individuals who may never have met or seen each other before physically. Ozok (2009) stresses that it is this excitement over the possibility of meeting new people, particularly of the opposite sex, that encourages students to use social networking. He further adds that virtual interactions through SNSs are also capable of influencing users' relationships with people close to them, and can be either good or bad in nature. This paper is a research proposal for studying the use of social networking among students aged 13-17 years. The proposed research topic was selected as it is evident that social networking is extremely popular among students and constitutes a major proportion of the activities they perform through the Internet.
Chatting with friends, posting messages or sharing photographs are some of the tasks that

Tuesday, September 24, 2019

Recommend Marketing Strategies to the Relevant Clothing Industry

Lands' End were rated the best by the respondents, primarily because these online retailers provided precise descriptions of the apparel and correct sizing information. Equally important in the superior ranking of these clothiers was the fact that they had easy-to-browse, informative websites. The majority of the respondents also felt that they got true value for the money spent on online clothes-shopping. The survey also revealed the flip side of online clothes-shopping. There were major issues with the size accuracy of the clothes, which impeded customers intending to buy clothes online. Returning clothes, and the costs associated with doing so, was considered a huge disadvantage by many respondents. 72 percent of the respondents complained about the lack of transparency in divulging shipping costs by online retailers. There were certain instances of billing mistakes and wrongly filled orders. In addition to these problems, consumers refrain from online shopping because of privacy concerns and issues regarding the security of financial transactions. Some customers find online shopping very confusing (Colberg 2002). The analysis of the survey reveals that customers are not satisfied, among other things, with the process of exchanging goods purchased online. Online clothing retailers should make the process of returns trouble-free for consumers. A straightforward and transparent policy in this regard would provide a huge boost to their sales (Rosencrance 2000). Many consumers would be tempted by a generous returns policy that promises to exchange the item, or simply to take the item back and refund its purchase price. The retailers can provide consumers with prepaid U.S. Postal Service labels which are valid for a certain period of time.
The customers can use these labels for returning the apparel with which they are not satisfied. This will make the process of returns simple and inexpensive for the unsatisfied customer. With an easy returns policy in place, customers

Monday, September 23, 2019

Facilities and Maintenance Systems for Hospitality Research Paper

It is evidently clear from the discussion that the maintenance staff must be equipped with integrated voice and data mobile devices so that they are always within reach and easily available. This also enables hotel managers to have visibility into, and to monitor, the progress of duties and work orders. Push-to-talk can be used when instant attention is needed, and text messaging otherwise. Mobile computers are also effective for processing work orders and generating automatic audit trails. However, most hotel owners and managers have some weaknesses when it comes to hotel design: they do not take into consideration the importance of an attractive hotel design. Failure to ensure an attractive hotel design results in a smaller customer population and thus less revenue. There is also a need to carry out maintenance tests to ensure quality services. The success of all leading resorts and hotels depends on the quality of the services offered to customers. By recording all the buildings, rooms, equipment, and floors in an asset management solution, it is possible to track the management and maintenance of everything. Cost reports will provide managers with the costs of maintenance at any organizational level. Hotel managers can also set up preventive maintenance approaches for generators and HVAC units to avert failures. Ultimately, a maintenance management system can be used to reduce costs, track maintenance, and ensure quality service to customers. In addition, using a maintenance system can help a hotel or resort management track management costs, extend the life of assets, provide high-quality services to customers, and maintain an efficient and clean environment. Additionally, a facility maintenance system helps to improve labor productivity, reduce costly downtime, and minimize investment and maintenance costs.
Just as when a person meets another person for the first time, it takes customers and travelers approximately 60 seconds to form an impression of a resort or hotel. Travelers and customers start by examining the parking area, décor, signage, the carpet, or the smell of the environment.

Sunday, September 22, 2019

Criticism of Quitak's Child Observation Essay

Quitak first explains that she is "working on the assumption that the problematic aspects of our experience contain the maximum potential". However, I think it is important to clarify from the outset how she reached this assumption, as the reader does not know whether she went into the observation with this belief or whether it developed as a result of her observation. There is another important omission relating to who the author actually is. She has not positively stated that she is a social work student, although this is implied when she states that her observations had "implications for social work." It is therefore difficult to ascertain her purpose for carrying out the observations. Furthermore, Quitak fails to mention how she came to select the child included in her observations, how many observation sessions took place, and the length of the sessions, so the reader is unable to assess whether there were any issues of bias in her selection process. The fact that she is the product of English middle-class parents means she may have gone into the study with particular assumptions, as she is observing a child who has a Palestinian parent. A significant shortcoming of her observations was her inability to "tune in to Selena's inner world" (p. 250), although Quitak does acknowledge this omission. She did not really try to question and understand Selena's behaviour, or how Selena might be feeling when she demonstrated behaviour Quitak did not like, and her observation suffered as a result. King (2010) stresses the importance of being able to "access the child's emotional world".

Saturday, September 21, 2019

Christoph Büchel's Simply Botiful: Overview and Analysis

Christoph Bà ¼chels Simply Botiful: Overview and Analysis Christoph Bà ¼chel. SIMPLY BOTIFUL 11.10.2006 – 18.03.2007 Hauser and Wirth Cheshire Street London Above the entrance to Christoph Bà ¼chel’s ‘Simply Botiful’ there is a ‘Hotel’ sign. Entry to the new ‘Hauser and Wirth’ space in Brick lane is made by walking past a dusty reception. Following this, gallery attendees are apprehended by an attendant with a clip board, who asks guests to ‘sign-in’, before taking their coats and bags. If you read carefully the documents that you are signing, it turns out that you are wavering your rights to sue, should you suffer damage to clothing, or to yourself during your tour of the exhibition. The reasoning behind this becomes clear as you proceed. Very quickly it is apparent that we are in a Hotel style mock up.[1] Once one has ascended the stairs into the main ‘gallery’, they are confronted with a hallway packed with small make shift beds. Taking the first door to the right (as most attendees will be inclined to do) one finds themselves in a room that seems a little out of place. It appears to be the study room of someone deeply interested in Psychoanalysis and Anthropology: The walls are covered in early naà ¯ve-imperial pictures of native persons and unusual animals, whilst a vitrine lies full of bones, clay pipes and other artifacts. In one corner resides an imposing Analysts chair. The association here makes one think of a long line of artists and writers that have dealt with psychoanalysis and analytical ideas (such as Dali), yet there is another element to Bà ¼chel’s work. Far from merely presenting psycho-analytical ideas in a pictorial form Bà ¼chel actually throws the gallery viewer on themselv es, pushing them into a personal analysis of their situation. In this first room one can hear the sound of loud (but distant) Thrash Metal music that appears to come from inside a wardrobe, on the near side of the room. 
Those more curious will find that in the wardrobe, behind a couple of mangy suits, there is a small hole, about two feet square, rising from the base of the wardrobe. Those more curious still will climb through the hole, not even sure if they are allowed, or supposed, to do so. It is in this sense that 'Büchel's complex installations force his audience to participate in scenarios that are physically demanding and psychologically unsettling.'[2] On entering the wardrobe the individual finds themselves in a room with a small bed, some bags of discarded children's toys and a burnt-out motorcycle in a glass cabinet. The music becomes much louder – pushing the boundary of what is safe to listen to. Emerging from the cupboard again, one must take the chance that a small audience has amassed in the first room and will be watching as you crawl on hands and knees back into the relative normalcy of the analyst's office. Aspects such as these give the show a performative element, as each gallery attendee becomes entertainment for others: 'He explores the unstable relationship between security and internment, placing visitors in the brutally contradictory roles of victim and voyeur.'[3] Other rooms on this first floor quite clearly point to this space being (ostensibly) a brothel. Porn magazines, crumpled bedsheets, red lights and condom packets litter three more bedrooms and suggest an uneasy seediness. Upon entering these rooms, one feels like an intruder and is put in the position of literally feeling like both victim and voyeur. In a sense, this is the trick that conceptual/readymade-based art plays. Duchamp's 'Fountain' (made under the pseudonym 'R. Mutt') – an upturned urinal that he attempted to exhibit in an open exhibition in 1917 – taunts the viewer. It is art because the artist himself says so: 'Whether Mr. Mutt with his own hands made the fountain or not has no importance. 
He CHOSE it.'[4] Yet the viewer of a readymade is left in the position of feeling 'duped'. Believing such pieces to be credible artworks involves a certain leap of 'faith'. Each person must make this leap aware that others are watching (thus they are a victim), but they also make this judgement over the artwork as the 'voyeur'. Büchel's semi-readymade, constructed from found objects in a converted warehouse gallery, takes this a step further and really challenges the viewer: the viewer is challenged into questioning whether what they are looking at is art, and into considering their role within the artwork – as participants in it. In this sense, the gallery attendees become 'readymades'. Once one has walked through the hotel, one arrives on a balcony overlooking what appears to be a cross between a workers' yard and a scrapyard, with several iron containers and piles of disused refrigerators. Upon descending a set of iron steps one finds oneself free to roam amongst the detritus. One container is full of broken computer parts; another is virtually empty, except for a filthy table. The overall sense one gets immediately is one of poverty – another container holds sewing machines and rolls of fabric: presumably some kind of sweatshop. There is something harrowing about this, which is compounded somewhat by images of hardcore porn pasted to the walls of one container that features nothing but a makeshift punch-bag and a seemingly empty refrigerator. However, there is also something celebratory about Büchel's huge semi-readymade. Gallery attendees gradually become more comfortable and rush from one container to the next, probing deeper to find unexpected treasures. The refrigerator at the far end of the above-mentioned container actually features a set of steps descending to a tunnel carved through the ground beneath the gallery. Upon arriving at the other end, one finds a huge mound of earth, with elephant or 'mammoth' tusks protruding from one side! 
How to react to this is again down to the viewer, and throughout the exhibition similar oddities are met with mixtures of fear, excitement, awe and humour. There is certainly a darkness inherent to Büchel's work, and a strong, controversial social commentary (beneath a container lorry in the workers' yard, the gallery attendee finds a secret room featuring Muslim prayer mats, Bibles and pornography). However, there is also a strong element that throws viewers upon their own resources, forcing them to question the role of art. In a sense, this is what good art does. As the philosopher Theodor Adorno argues: 'It is self evident that nothing concerning art is self evident anymore, not in its inner life, not in its relation to the world, not even in its right to exist.'[5] This leaves art in the difficult position of constantly questioning itself, and one way of doing this is to present viewers with a constant need to question their relationship with the artwork. This often makes for art that appears on the surface to be tragic. Yet the way in which art can lead viewers to question not only art but their own confidence in judging art actually provides challenges that may have positive results. Art gives one an opportunity to really engage with oneself and one's environment in a way that mass consumerist culture does not. Adorno argues: 'The darkening of the world makes the irrationality of art rational: radically darkened art. What the enemies of modern art, with a better instinct than its anxious apologists, call its negativity is the epitome of what established culture has repressed and that toward which art is drawn.'[6] Therefore Büchel's somewhat twisted and tragic world actually breaks through the repressive element that society enforces. Perhaps this is one meaning that can be applied to the representation of the analyst's/anthropologist's office, which is the first room the viewer stumbles upon when entering the exhibition space. 
Further to this, Büchel's show builds upon Joseph Beuys' declaration that 'We are all artists' (a declaration that itself built upon Duchamp's proclamation that 'anything can be art'): 'EVERY HUMAN BEING IS AN ARTIST […] Self-determination and participation in the sphere (freedom)…'[7] In inviting the audience to partake in the artwork as both voyeur and victim, Büchel makes evident the capacity of all individuals to fulfil a role in bringing forth societal change, as artists with the capacity to designate mere objects as art. The confidence inherent in such a judgement can thereafter be applied to other spheres of life. The success of Büchel's exhibition resides in his demonstrating the above points without overcomplicating things. The viewer is drawn into an interactive art space that questions constantly, without necessarily being aware that they are put in the position of having to answer complex art/life riddles. Yet at some point, during or after the exhibition, something of the nature of Modern and Postmodern/Contemporary art will be made apparent to them: for an artist to achieve this is a rare skill.

Bibliography

Books
Adorno, T.W., 1997, Aesthetic Theory, transl. Hullot-Kentor, R., Athlone Press, London.
Harrison, C. and Woods, P., eds., 1998, 'On Commitment', Art in Theory: An Anthology of Changing Ideas, Blackwell, Oxford.

Exhibition Press Release
Christoph Büchel. SIMPLY BOTIFUL. 11.10.2006 – 18.03.2007, Hauser and Wirth, Cheshire Street, London.

[1] For a fully detailed internet 'walk through' tour of the exhibition see: http://www.ghw.ch/exhibitions/walkthrough.php?exhibition_id=415
[2] From the Press Release for 'Christoph Büchel, Simply Botiful'. Hauser and Wirth Gallery, 2006.
[3] Ibid.
[4] Harrison, C. and Woods, P., Art in Theory: An Anthology of Changing Ideas, 1998, p. 248.
[5] Adorno, T.W., Aesthetic Theory, transl. Hullot-Kentor, R., 1997, p. 1.
[6] Ibid., p. 19.
[7] Harrison, C. and Woods, P., ibid., p. 903. 
Forgive the fragmented nature of this quote. The text itself is equally fragmented.

Friday, September 20, 2019

Functions of Network Management

In this report I will explain the functions of network management. There are many stages to creating a network:

Planning – planning is crucial, as you will need to map out what kind of network you want to create and what its purpose will be.
Research – researching what network devices and cabling will be required, and researching topologies to create a suitable network.
Design – design is essential, as you will need to know what your network will look like before it is made.
Preparation – begin creating your network; install the cabling and devices and connect them.
Development – set up the devices in the network and make sure they are on and ready to communicate.
Testing – test the network and check that everything is up and running smoothly.
Maintenance – if any issues arise, troubleshoot the errors and make sure that the network is stable.
Evaluation – analyse and understand the network; if any problems occur, document them so they will be easier to troubleshoot in the future.

Task 1: Functions of network management (P4)

Network configuration is necessary to allow computers in a network to communicate with each other. Configuration exists to control networks and allow troubleshooting or performance enhancements. Many devices are used in a network; the most important are routers and switches. When configured correctly, they can communicate with each other, which in turn allows users to communicate. Fault management is compulsory in any network, as it detects problems and minimises failure; in case of failure, it ensures you are prepared to troubleshoot issues as quickly as possible. By monitoring the network you can see when an error occurs, which helps ensure that the network stays up for as long as possible. Fault management can be approached from a remotely controlled centralised console, which allows you to easily reboot or troubleshoot one or more computers. 
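The centralised fault-management console described above can be sketched in a few lines. This is a minimal illustration, not a real monitoring tool: the device names and the `probe` function are hypothetical stand-ins for whatever reachability check (for example a ping) a real network would use.

```python
import datetime

class FaultMonitor:
    """Minimal sketch of centralised fault management: probe devices,
    record failures in a log, and raise alerts for the engineer."""

    def __init__(self, probe):
        self.probe = probe      # function: device name -> True if reachable
        self.fault_log = []     # history of failures for later analysis

    def check(self, devices):
        alerts = []
        for device in devices:
            if not self.probe(device):
                # Record when the device failed, then alert the engineer.
                self.fault_log.append((datetime.datetime.now().isoformat(), device))
                alerts.append(f"ALERT: {device} is unreachable")
        return alerts

# Simulated reachability table; a real probe might ping each device.
up = {"router-1": True, "switch-1": False, "server-1": True}
monitor = FaultMonitor(lambda d: up.get(d, False))
print(monitor.check(["router-1", "switch-1", "server-1"]))
```

Because the probe is injected, the same class could be reused with a real ping-based check without changing the alerting logic.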
Account management involves taking care of users' accounts and ensuring they can access all software easily. The admin makes user accounts for people in an office or school so that users can access their files at work. Account management groups together users with the same rights on their accounts, which makes things simpler for the administrator, who can make a change to an entire user group rather than to each account. Account management is required in large networks such as schools and organisations, as it allows the administrator to manage multiple accounts easily; it would be hard to install software or enable access for every single account separately. The purpose of performance variables is to work out how well key parts of the network are performing, and have performed in the past. By checking these, it is possible to measure whether performance is increasing or decreasing; this is crucial because if performance is decreasing, you will be able to see it. Examples of performance variables are user response times and network throughput. Network throughput is how fast data is transferred through a network; user response time is how fast the network feels to users. Line utilisation is the amount of data on the cabling; if too much data is loaded onto the cable, performance suffers. Security is essential in any network to ensure safety; by implementing security in a network you will prevent viruses and other threats such as hacking. Threats come in many forms, such as infected files and documents from the internet and spyware, and even physical issues such as fires. Because there are different types of threats to a network, there are different ways to deal with them. Firewalls and antivirus software should be installed to prevent viruses from entering the network. If a virus gets into a network, it can sabotage the network's performance and put the company's data in jeopardy. 
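The performance variables described above (throughput and line utilisation) lend themselves to simple arithmetic. A minimal sketch, assuming a made-up figure of 150 MB transferred in 60 seconds over a hypothetical 100 Mbit/s link:

```python
def throughput_mbps(bytes_transferred, seconds):
    """Network throughput: data moved per unit time, in megabits per second."""
    return (bytes_transferred * 8) / (seconds * 1_000_000)

def line_utilisation(actual_mbps, capacity_mbps):
    """Fraction of the link's capacity in use; values near 1.0 mean congestion."""
    return actual_mbps / capacity_mbps

# 150 MB in 60 s over a 100 Mbit/s link:
rate = throughput_mbps(150_000_000, 60)
print(round(rate, 1), round(line_utilisation(rate, 100), 2))
```

Tracking these two numbers over time is what lets an admin see whether performance is increasing or decreasing, as the report describes.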
It is also very important to back up files to another server in case of an attack, so that any lost data can be restored effectively. Data logging is recording all of the information that passes through a network; this makes it easier to identify problems, as you can look through the data and analyse where an error has occurred. Logs are not usually kept permanently, as they may not be necessary. It is useful to have data logs in the parts of the network where errors occur, to help you identify them as soon as possible. Checking performance and traffic is essential to ensure that your network is performing as well as it can, and clearing up traffic will improve performance. Reporting is a management feature which documents performance and data usage throughout the network for the admins. Reports are often produced using systems such as Windows Server, which reports response times and packet performance.

Task 2: Fault Management (M2)

Fault management in networks is about locating and troubleshooting problems, and it is important for keeping the network running efficiently. Why is fault management necessary in networks? It is essential because it allows the network to perform at maximum capacity without being disrupted. If any errors occur, data in the network could be jeopardised, so troubleshooting errors as soon as possible allows the network to run with minimal disruption. This should be carried out remotely, as it would be time-consuming to physically go to each device in the network. The main goals of fault management in any network are to:

- Monitor the network remotely
- Enable alerts to warn the network engineer about any failures
- Create logs to record past failures and prevent future problems

One of the goals of fault management is to monitor the network remotely through a centralised device. 
This allows the network engineer to control the network quickly and efficiently, without needing to access each physical device, which can be very time-consuming. By monitoring performance, the network engineer can troubleshoot failures quickly. Another goal of fault management is to enable alerts that warn the network engineer immediately when there is a fault in the network. By ensuring that the network engineer is notified about faults, each fault can either be prevented or solved as quickly as possible, keeping the effect on performance minimal. Finally, creating logs of faults is essential, as it allows the network engineer to look back at them later and solve problems more quickly. Logs also show how well the network is performing, since every previous fault can be seen. If there is a recurring fault in the network, the engineer will be able to prevent it and ensure that it does not happen again. If the network is affected by a failure, performance can suffer and the network could crash. This would be terrible for the company, as it can prevent staff from communicating and doing their jobs.

Task 3: Routine performance management (D1)

Routine performance management is scheduled routine maintenance: the network is checked on a regular basis to ensure that it is up to speed. This is crucial for any business, as you will want to make sure that the network is running smoothly and that the company's information is not in jeopardy. If a company's network is not checked frequently, the company could be severely impacted: it could lose data, or, if part of the network is down, staff will not be able to communicate, which will end up losing the company money. 
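The recurring-fault idea above can be illustrated with a short sketch. The log entries and the threshold are invented for the example; the point is that counting faults per device surfaces the recurring problems a fault log is kept for.

```python
from collections import Counter

# Hypothetical fault-log entries gathered over time: (device, message).
fault_log = [
    ("switch-1", "link down"),
    ("server-1", "disk full"),
    ("switch-1", "link down"),
    ("switch-1", "link down"),
]

def recurring_faults(log, threshold=2):
    """Devices whose fault count meets the threshold -- candidates
    for preventive maintenance rather than repeated repair."""
    counts = Counter(device for device, _ in log)
    return [device for device, n in counts.items() if n >= threshold]

print(recurring_faults(fault_log))
```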
There are many different types of risk, ranging from physical issues to hardware issues: for example a fire, which can endanger both lives and the hardware in the network, or an overloaded switch or router, which can fail or perform slowly and also seriously affect the company. There are a few tasks the network manager must perform to keep the network up and running efficiently. Backups are extremely important in a network in case of data loss or failure. If data is lost and it has not been backed up recently, this will be a huge setback for the company, as important information will be gone. It is important to back up data at least daily or weekly, so that you have the latest data available to restore in case of errors or failures in the network. Backups can be made to multiple places. The most common is a remote server that holds the company's data; this is efficient because the data is stored in one place, making it easier to access and minimising downtime. Companies also often use redundant array of independent disks (RAID) systems. RAID acts as a live backup, writing data across many interconnected hard disks as it is created; this is extremely useful and can also minimise downtime, as it allows the network engineer to restore data very quickly. User accounts are used in every organisation, as employees need their own personal accounts to access the network and do their jobs. Every user has a unique username, which makes it easy to identify each user. All users have the same privileges and must change their passwords often for security reasons. The network admin has control over the user accounts and can help employees who forget their passwords. Users are usually put into groups by department or service, for example Sales or Accounting. 
This makes it much easier for the network engineer to control each section of the network and to make changes to a group of people with ease. It is also more organised, as people who do the same job are in the same group; this gives everybody the same privileges and allows them to do their jobs efficiently. Logon scripts are activated when someone logs on to a device in the network. They are very useful, as they automatically carry out tasks. Scripts are written in scripting languages and run from the command prompt; they can run commands such as ipconfig as soon as the device is up and running. If the network engineer had to carry this out manually every single time, it would take very long. Virus scans are crucial in any network to ensure safety and efficiency. The network admin runs virus scans regularly on all devices on the network. Any viruses found are logged in the antivirus software to clearly show how many have occurred since the last check; this is important because if the admin sees viruses occurring often, it could indicate a weakness in the network. Virus checking on a network differs from a home setup: on a network, when a virus is encountered the network admin is notified and investigates it, whereas at home people run antivirus scans on their own devices. Checking for viruses is very important, as a virus can jeopardise data within the network, which is why the administrator must be notified immediately and eliminate the threat. Frequent file clean-ups are necessary to free up space and organise data correctly. Temporary files and old files should be deleted, as they are not needed on the drives; this frees up space for future files. The network admin should frequently check that users have enough space on their drives to ensure efficiency. 
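The file clean-up task above can be sketched as follows. The file names and ages are made up for illustration; in a real script the age would be derived from the file system (for example from `os.path.getmtime`) rather than supplied by hand.

```python
def stale_files(paths_with_age, max_age_days=90):
    """Pick out files older than max_age_days as clean-up candidates.
    paths_with_age: list of (path, age_in_days) pairs."""
    return [path for path, age in paths_with_age if age > max_age_days]

# Hypothetical user-drive contents with ages in days:
files = [
    ("report_2015.tmp", 400),
    ("current_budget.xlsx", 3),
    ("old_log.txt", 120),
]
print(stale_files(files))
```

An admin would review such a candidate list before deleting anything, rather than removing files automatically.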
Task 4: Network Security Policy

Security is essential in every network, especially for Phoenix. A range of security policies will need to be implemented for the network to run efficiently and securely. These procedures keep sensitive information safe and protect client data. Below I have outlined the security policies that are necessary in any network for it to run smoothly. A firewall is a program that prevents malicious traffic from entering the network, and firewall management is crucial for stopping attacks. There are many types of attack that can occur in a network. One type is known as an access attack: a stranger tries to gain information from the network and take control of it. Another type is a DoS (Denial of Service) attack, which affects the systems in the network and can block employees from accessing them, a setback for the company. ACLs (access control lists) are used to permit or deny access to users throughout the network. The network admin may want only a specific group of users to access certain resources; by adding ACLs, the admin can reject access from other groups within the network. This is necessary for Phoenix because it prevents outsiders from accessing data within the network. The devices in a network must be protected, as they contain sensitive data. Hardening is making a device secure and reliable. There are many ways of hardening a device: one is enabling antivirus protection on each device to protect it from viruses and malware; another is applying ACLs to the devices, which stops people from accessing them without permission. Securing your devices is crucial in every network to ensure efficiency and reliability. If a device goes down or is hacked, it risks data theft, which would be a setback for the company, in this case Phoenix. 
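The first-match-wins behaviour of an ACL, including the implicit deny that most real devices apply when no rule matches, can be sketched in Python. The subnets, rule order, and group names here are hypothetical, not taken from any actual Phoenix configuration.

```python
import ipaddress

# Hypothetical ACL: ordered rules, first match wins.
ACL = [
    ("permit", ipaddress.ip_network("192.168.10.0/24")),  # e.g. a Sales subnet
    ("deny",   ipaddress.ip_network("192.168.0.0/16")),   # other internal subnets
]

def acl_action(source_ip):
    """Return the action the ACL takes for a given source address."""
    addr = ipaddress.ip_address(source_ip)
    for action, network in ACL:
        if addr in network:
            return action
    return "deny"  # implicit deny when nothing matches

print(acl_action("192.168.10.5"), acl_action("192.168.20.5"))
```

Rule order matters: swapping the two rules would deny the Sales subnet too, which is why real ACLs are read top to bottom.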
Reviewing the security policy frequently ensures that the company is up and running securely. All the security policies are important to the network, as they keep it running safely. A record should be kept of any threats so the network admin can look out for issues that have occurred in the past and ensure that they don't happen again. Reviewing all the policies allows the network to run at its optimal performance. Users should have the right permissions on their accounts. The network admin should check users' accounts at least once a week in case some users have rights that they aren't supposed to have. This is important for Phoenix, as it keeps the company organised and ensures safety by granting the correct rights to users.
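The weekly permissions check described above can be sketched as a small audit: compare each user's rights against what their group is entitled to and flag anything extra. The groups, users, and rights below are invented for illustration.

```python
# Hypothetical entitlements per group, and actual account records.
expected = {"Sales": {"crm"}, "Accounting": {"ledger", "crm"}}
accounts = {
    "alice": ("Sales", {"crm"}),
    "bob":   ("Sales", {"crm", "ledger"}),  # extra right -- should be flagged
}

def audit(accounts, expected):
    """Return users holding rights beyond their group's entitlement."""
    flagged = {}
    for user, (group, rights) in accounts.items():
        extra = rights - expected.get(group, set())
        if extra:
            flagged[user] = sorted(extra)
    return flagged

print(audit(accounts, expected))
```

Run weekly, an audit like this turns the manual account check into a repeatable report the admin can act on.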

Thursday, September 19, 2019

Transformation Essay

America is ever changing; over the centuries it has transformed in many ways. There has been an increase in immigrants, especially Hispanics, which has caused a transformation of both language and culture. Richard Rodriguez, in his book Brown: The Last Discovery of America and in other essays, has brought his views on these matters and presents brown as a new way of describing America: brown as color; as impurity; as language; as America. Richard Rodriguez is a writer who is artistic and has an idealistic way of recounting things. In his essay "Late Victorians" he writes about a woman who jumps off the Golden Gate Bridge in San Francisco. He describes it as "…before she stepped onto the sky. To land like a spilled purse at my feet" (Encounters, 496). He compares the woman hitting the ground to a "spilled purse." When you think of a spilled purse you don't think of tragedy, so his comparing this insignificant incident of a purse hitting the ground to the death of a woman catches you off guard. Rodriguez says it in such a tranquil manner that the tragedy seems unrealistic. He shows romanticism again elsewhere in the essay: "On a Sunday in summer, ten years ago, I was walking home from the Latin mass at Saint Patrick's, the old Irish parish downtown, when I saw thousands of people on Market Street. It was San Francisco's Gay Freedom Day parade – not marching bands. There were floats. Banners blocked single lives thematically into a processional mass, not unlike the consortiums of the blessed in Renaissance painting, each saint cherishing the apparatus of his martyrdom." (493) Rodriguez's comparing the parade with religious allusions makes it more glorious. He compares the parade of floats and banners to a "processional mass." He satirically portrays gays as saints just as he is coming from church, which considers homosexuality a sin. He is basically beautifying the parade. He romanticizes to capture your attention and to bring you into his world. 
He wants you to see things as he sees them. He wants to "defy anyone who…say[s] what is appropriate to my voice" (Brown, xi). Rodriguez, in his essay "Peter's Avocado," expresses "[b]rown as impurity" (Brown, 194). This brown is not brown as color but as something "mixed, confused, lumped, impure, unpasteurized, as motives are mixed…" ("Peter's Avocado", 197). However, brown can be... ...of the United States not for the battles and politics, but for the transformation and complexity of language that occurred through the centuries. "I eulogize a literature that is suffused with brown, with allusion, irony, paradox-ha! -pleasure" (Preface, xi). With disconnected allusions, metaphors, and unrealism, Rodriguez not only conveys his ideas throughout his essays but also shows us part of himself as a writer. He respects people's role in society. He treasures how assimilation can change a culture. He has a passion for brown for converting color and race. He loves language for the continuous changes it has been through over time. He values transformation, whether it is of color, culture, language, or a nation.

Work Cited:
Rodriguez, Richard. "Late Victorians" and "The Achievement of Desire." Encounters: Essays for Exploration and Inquiry. 2nd ed. Ed. Pat C. Hoy II and Robert DiYanni. New York: McGraw-Hill, 2000. 475-492, 493-505.
----. "The Triad of Alexis de Tocqueville," "In the Brown Study," "The Prince and I," "Peter's Avocado," and "Hispanic." Brown: The Last Discovery of America. New York: Penguin Putnam Inc, 2002.

Wednesday, September 18, 2019

Two Kinds of Love in Movie Casablanca Essay

In the movie Casablanca, directed by Michael Curtiz, two different kinds of love are exposed. The love relationship between Ilsa Lund and Rick is a more passionate relationship, while the one between Ilsa and Victor Laszlo is more intimate. Love is composed of different feelings, and because of that it can be expressed, as seen in Casablanca, in different ways. "The Intimate Relationship Mind", a text by Garth J. O. Fletcher and Megan Stenswick, helps support that claim by providing a scientific background on how love is shaped by those different feelings. It says that "love is composed of three distinct and basic components that each represent evolved adaptations; namely, intimacy, commitment, and passion" (Fletcher and Stenswick 73). Those three components help shape different kinds of love. The first love relationship portrayed in Casablanca is the one between Ilsa and Rick. It is explicit that the relationship is a passionate one, and that what they had was more of a fling, an affair, than a relationship that would last forever. That can be noticed in the lyrics of the song "As Time Goes By", the theme song for their love: "You must remember this, / A kiss is just a kiss, / A sigh is just a sigh, / The fundamental things apply, / As time goes by." The lyrics basically mean that their love will not last long: they may kiss and sigh as time goes by, but one day it will be over and become a memory. There certainly is passion in Rick and Ilsa's relationship, but it lacks the two other components stated by Fletcher and Stenswick: intimacy and commitment. That is shown when Rick has his flashback of Paris. Rick asks... ... That is how Ilsa and Laszlo's relationship was shaped, with higher levels of intimacy and commitment, and lower levels of passion. It is a relationship that would typically last. 
Casablanca does an excellent job of portraying two different kinds of love: a passionate love, and an intimate and committed love. Passionate love is unavoidable and a part of life, but people need to accept that a love based solely on passion does not last. An intimate and committed love is what will persevere and what they need to hold on to. In the final scene Rick and Ilsa accept that their moment is gone, that they will be separated for life but "will always have Paris". She then moves on to continue her relationship with Laszlo. That is the main message of Casablanca: you need to accept that passionate love doesn't last and embrace intimate and committed love.

Tuesday, September 17, 2019

Softball Paper

The History of Softball
PHEC 202

Table of Contents
1. History of Softball
2. How to Play Softball
3. Equipment Needed to Play a Game
4. Diagram of a Softball Field
5. Bibliography

Softball is one of America's favorite pastimes. Softball is now a very popular game that originated in Chicago, but it didn't become popular overnight. The game is said to have been invented by a man named George Hancock, and his creation has become one of the most played games in America. In this essay I will discuss the history of softball, the basic rules, and the necessary equipment needed to play the game.

Softball was started on Thanksgiving Day in 1887. It all began when a group of men gathered in a gym to hear the score of a football game; after the score was announced and all bets were settled, one of the men threw an old boxing glove at another man, who hit it with a pole. George Hancock, said to be the inventor of the game, took the boxing glove and tied it so it would look like a ball, took chalk and drew a diamond on the floor, broke a broom handle to use as a bat, and began to play. This was the beginning of softball.

Hancock's game was a smaller version of baseball and was played indoors. Within a week's time Hancock created an oversized ball and a bat with a rubber tip that he used to play the game. He also returned to the gym to make permanent foul lines on the floor. He then wrote the rules and named the game Indoor Baseball. This new sport quickly became a hit and went international. In the same year, 1887, the Indoor Baseball guide was published, explaining the rules of the game and how to play. Ten years later the game was moved outdoors. It was then known as Indoor-Outdoor. This version also caught on very quickly, and a set of rules was published for it in 1889.
Although Chicago is the birthplace of this game, through the years it took on some modifications in Minneapolis around 1895. It is said that a fire department officer by the name of Lewis Rober Sr. used his version of the game to keep his men in shape and occupied. It is also said that he had no prior knowledge of Hancock's version of the game. Rober's version was played in a vacant lot next to the firehouse. In 1896 Rober was moved to a new unit and put in charge of coaching another team. This team called themselves the Kittens, and in honor of their name the game was called Kitten League Ball in 1900. The name was later shortened to Kitten Ball.

In 1895 the women's softball team was formed in Chicago at West Division High School. Although the team was started, they did not begin competing until 1899. As the game grew more popular, more people began to pay attention to the women's game, and in 1904 the Spalding Baseball Guide was published. This publication of the rules dedicated a substantial amount of the book to the women's game of softball.

In 1933 there was the Chicago National Tournament. This was the first tournament where both male and female champions were honored in the same way. This tournament helped lead to the International World Championships in 1965; by allowing women to compete in such tournaments, it helped the sport become international and move on to the Pan-American Games and the Olympics. Softball at this time was a professional league, and contracts ranged from $1,000 to $3,000 per year. In 1980, due to financial hardship, the league was broken up. Even so, softball is still a popular game today. There is now an Amateur Softball Association that registers more than 260,000 fastpitch softball teams, and slowpitch is gradually growing. Compared to baseball, softball is simpler to play and is played on a smaller scale. There are 9 players on a softball team.
The playing field is divided into the infield and the outfield. The infield is the portion of the field connected by the bases. Each base is set between 55 and 65 feet apart. When the bases are joined they take on the shape of a diamond, and the infield is considered the portion inside the baseline. Outside the baseline but inside the playing field is the outfield. During a game, if the ball goes outside the 1st or 3rd base line it is considered a foul ball. If this occurs the runner cannot go to the next base and the batter gets another chance; however, if the ball is caught in the air outside the line, the batter is considered out. An official softball game has 7 innings. An inning is when both teams have had a chance to bat. This is how a game of softball is played.

What makes softball different from baseball is the pitch. In softball the ball must be thrown underhand. In order to pitch, the pitcher must have both feet on the pitcher's rubber, and both hands must be on the ball at the start of the pitch. When the pitcher throws the ball, it goes to the batter. When batting, the team must keep the same order of batters throughout the entire game. The batter stands in the batter's box, which is the box marked with chalk near home plate that a batter must stay within while batting. The batter is considered out if three strikes are called, if a fly ball is caught, or if the batter does not stand in the batter's box. A strike occurs when a ball is swung at and missed, or is called when the ball enters the strike zone and is not swung at. The strike zone is the area between the batter's knees and armpits. A fly ball is a ball that is hit into the air in the infield. If any of these things occur, the batter is out. If the batter hits the ball, the next step is running. When running, the runner must touch each base.
Runners can only overrun one base and can be tagged out if they are not on the base. While on base, the runner can only run when the ball leaves the pitcher's hand. If the runner is on base when a fly ball is hit and caught, the runner must remain at their original base and cannot move on to the next base. While running, all batters that have made it to a base must stay in that order when returning to home plate. Stealing bases is not allowed in softball. A runner is considered out if they are tagged out before reaching a base, if the ball gets to 1st base before the runner, or if the runner runs more than three feet out of the baseline to avoid being tagged out. These are the rules that runners must follow.

In order to play this game, the following equipment is needed. First is the bat: when standing next to the bat you are going to use, it should come up to your wrist, and it should be light enough for you to swing comfortably. A batter may also use a batting helmet to protect their head while up to bat. Next is the ball: softballs range from 11 inches, which are used by children ages 10 and under, to 12 inches, which are used by everyone ages 12 and above. After the ball is the glove. The only positions that have a specific glove designed especially for them are the first baseman and the catcher; all others use the same type of glove, depending on which hand you catch with. If you use your right hand the most, you would put the glove on your left hand so you would be able to throw with your right hand, and vice versa for the left hand.

In this essay I have explained the history of softball, how to play the game, and the necessary equipment needed in order to play it. Although there were some hang-ups that could have stopped the growth of softball, its popularity continued to grow. Softball is still a popular game, with millions of people who play it today.

Bibliography

Amateur Softball Association of America (ASA). (2012).
Amateur Softball Association of America (ASA). Retrieved October 7, 2012, from http://www.asasoftball.com/about/asa_history.asp

History of Softball. (2000). History of Softball. Retrieved October 7, 2012, from http://www.softballperformance.com/softball-history/

Lynch, W. (2011, May 26). Rules on How to Play Softball. LIVESTRONG.COM. Retrieved October 7, 2012, from http://www.livestrong.com/article/426838-rules-on-how-to-play-softball/