Wednesday, July 31, 2019

Edgar Allan Poe's Tribute

The poem â€Å"Annabel Lee† by Edgar Allen Poe is written to tell the story of the speaker's greatest love. The speaker and Annabel Lee loved each other with â€Å"a love that was more than love† until she fell ill and died (9). The speaker blames the angels for killing his darling and proves his love for her by attending her graveside every day for the rest of his life.One way the speaker demonstrates his love is by describing their home (the setting of the poem) as a â€Å"kingdom by the sea† (2). This means the speaker sees himself as royalty because the love he and Annabel Lee share makes him so incredibly wealthy and powerful. This power and wealth was so great, in fact, that â€Å"heaven coveted† the love about which Edgar Allen Poe wrote (10). The angels were jealous of this love being shared on earth, which was apparently more wonderful than anything they had experienced in heaven as angels. The use of the word â€Å"coveted† implies a darker meaning. This was not the simple jealousy of a teenage girl. The angels were committing a sin, breaking one of the commandments of their Divine Master by coveting the love between two of His children. Finally, the speaker's grief at her death further implies the depth and strength of their love. It is logical that the greater the love, the greater the grief; the inverse is also true: the greater the grief, the greater the love. Instead of merely being laid to rest in a coffin or a grave, death â€Å"shut her up in a sepulcher† there â€Å"by the sea† (19, 40). Sepulcher brings such dark connotations that we can almost see the speakershrouded in black after her death, mourning as deeply as the seanext to her tomb.Edgar Allen Poe contributed to the extremity of the poem by using a tone of reverence and pride. This is not some silly poem about puppy love. The love shared by Annabel Lee and the speaker was serious, and seems to be one we can only refer to with a sense of sobriety and admiration. 
In line 28, the speaker refers to his pride by comparing himself to those older and wiser, saying that hehad experienced a love that â€Å"was stronger by far† than anything those older and wiser had experienced. The

Tuesday, July 30, 2019

A Good Leader: Odysseus and Gilgamesh

Strength, determination, and dedication are a few examples of the characteristics a leader should possess. The characteristics of a good leader may vary in the eye of the beholder; however, I believe that overall there are a few qualities that are critically important. Throughout a person's life, the experiences they endure shape them and build them into an individual. Like the lugals in Mesopotamia, it is a leader's obligation to protect and serve. In the Mycenaean civilization, the Wanax stood at the top of the social ladder. In the Odyssey, Odysseus would be a Wanax because he owned an independent walled kingdom or palace.

Both Odysseus and Gilgamesh were looked up to as leaders. When asked if they were successful leaders, I was a bit stuck. After some thought, though, I came to the conclusion that both Odysseus and Gilgamesh were successful leaders. They were not always successful, but their adventures and experiences molded them into reliable men. Our first glance of Odysseus comes when Telemakhos speaks of him to Athena (Odyssey 8-9). He explains that he would rather have a father who is happy and growing old in his house than one with a mysterious and dangerous life. This is the first example of why I believe Odysseus started out as a bad leader. Although he was off fighting against his wishes, he lost contact with the people he cared about the most and fell off the radar. I believe that, as a leader, he should have been able to somehow get into contact with his family and inform them that he was okay.

When comparing our first impression of Gilgamesh to Odysseus, we see someone who is extremely different. Odysseus had a loving family and a loyal wife. In contrast, Gilgamesh was selfish and chased the glory he thought he deserved. He was on the hunt for immortality and, in doing so, abandoned his city of Uruk to travel with his friend Enkidu. A successful leader should never abandon his or her people.
One example that illustrates Odysseus' failure of leadership can be seen by looking at his crew: none of its members survived. A successful leader should always lead, protect, and receive respect from his crew, and in many ways the members of Odysseus' crew were disobedient. When Odysseus and his crew traveled to the island of Helios, he distinctly told his men not to touch the cattle (Odyssey 219-220). When Odysseus fell asleep, Eurylokhos, the leading member of the crew, convinced the men to kill one of the cattle for food (Odyssey 221). Disobedience shows disrespect, and when the members of your crew do not listen to what you say, it shows that they do not take you seriously. A second example showing how Odysseus could not control his crew is the bag of wind (Odyssey 166). I believe that a successful leader should be able to control all of his people, namely his crew.

Although Gilgamesh does not have a crew, he proves that he lacks the characteristics of a good leader in a few instances. Gilgamesh and Enkidu steal trees from the cedar forest, which is forbidden to mortals. This is a prime example of how Gilgamesh cares about no one but himself. He disrespects the Gods by entering the forest and goes even further by cutting down the trees. During this endeavor they also kill Huwawa, the monster that guards the forest. At first, Gilgamesh flees when he sees the face of Huwawa (Gilgamesh 26). Fleeing from the face of the demon shows that he was afraid, and no leader should ever show fear. Another example of Gilgamesh lacking the qualities of a leader is when he kills the Bull of Heaven.
The goddess Ishtar was in love with Gilgamesh and wanted to be with him; when she asks him to be her husband, he rejects her, and she goes straight to her father and mother, Anu and Antum (Gilgamesh 29-32). Ishtar has her father send the Bull of Heaven down to kill Gilgamesh; however, Enkidu and Gilgamesh conquer the Bull of Heaven and kill it. The council of Gods is enraged and demands that Enkidu die to pay for the deaths of both Huwawa and the Bull (Gilgamesh 37-38). Betraying the Gods enough for them to wish death upon Enkidu shows that Gilgamesh was certainly not being a respectful mortal, let alone a respectful leader.

Odysseus was a very sneaky and cunning man. He was able to defeat many monsters by outsmarting them, but this was not always the best way to go about achieving victory. Odysseus came upon the Kyklops while on his journey with his crew. They were stuck in his cave, and Odysseus thought of a sneaky plan to get away. He and his crew took a large pole and poked the Kyklops in the eye; right before they did this, however, Odysseus told the Kyklops that his name was Nohbdy. When the Kyklops ran out of his cave bellowing in pain and his fellow Kyklopes asked who had done this to him, "Nohbdy, Nohbdy's tricked me, Nohbdy's ruined me" (Odyssey 157) was his reply. This was extremely smart and cunning, and Odysseus would have been able to get away safe and sound. The unfortunate part came when Odysseus decided to brag about his victory and announce his real name to the Kyklops. A leader should not feel the need to brag about victories he or she has earned. Every leader knows that they are capable of defeat, and bragging is never something a successful leader should do.

As you can see, there are several examples proving that Gilgamesh and Odysseus were not successful leaders from the start and throughout their journeys. The realization comes at the end of both works, where I believe the leaders changed their paths for the better.
When Odysseus and Telemakhos meet up, they know that they must now defeat the suitors and win the palace back as their own. Odysseus disguised himself as a beggar so that he could go into the palace and prepare for the defeat of the suitors. You could already tell that he was starting to change when one of the suitors insulted him for being a beggar and not being worthy. Normally Odysseus might have revealed who he really was in order to prove his excellence; however, he did not seem fazed by it. From there, Telemakhos and Odysseus defeated all of the suitors and claimed the palace, as it should be. Odysseus was back where he belonged, ready to rule his people as he should have been doing from the start.

Gilgamesh, on the other hand, was searching not for his way home but for immortality. After his long journey, he comes to the realization that death is inevitable. He learns from his talk with Utnapishtim that immortality cannot be earned when you are trying to get it. In Utnapishtim's case, he was not looking for immortality when he built his ark; he was building the ark because he was told to, and immortality was awarded to him afterward as a reward. Death is something that cannot be avoided, and Gilgamesh should simply learn to accept that. Gilgamesh then finally realized what he had done to his people. Because he was so wrapped up in the glory, fame, and immortality he was trying to reach, he gave up on something that was a part of him. Gilgamesh knew at that moment that he needed to travel back to Uruk and rule his people the way they deserved to be ruled.

In my opinion, the end of both men's journeys is the most important part. Yes, they were definitely not successful leaders for most of the story; however, the realizations at the end meant the most. When they realized that they had let their people down, they knew they needed to change. It shows that they will be there for their people from now on and be the best leaders they can be.
I also believe that lessons were learned through the obstacles they overcame along the way. Every champion athlete has to overcome bad competitions, injuries, and bumps in the road in order to stand at the top of the podium, and a successful leader has to do the exact same thing.

Monday, July 29, 2019

Do Ex-Military Make Good Police Officers?

The essay "Do Ex-Military Make Good Police Officers?" examines this question by identifying which factors contribute to success within the two types of organizations: military and police. There are certainly parallels between the attributes that make for a successful career in either the military or the police force. However, a successful military record does not necessarily equate to a successful career in law enforcement. General attributes such as honesty, integrity, and discipline are commonly valued in both careers. However, some of the skills in the second group are not necessarily valued in the military. For example, the ability to observe and remember detail has little to do with many functions of military personnel. The ability to assess situations and decide on a course of action is also of little value in many military positions, where instant obedience might be more valuable.

Another factor in police work is the size of the groups, which are generally much smaller than those in the military. Many military positions seek to create groups of very similar people, because large groups of interchangeable members are needed for massed power. Most groups on police forces demand more dynamic interaction and cross-training of team members. The behavior of a military group is expected to be extremely disciplined and nearly thoughtless after orders are given; the groups need to work like well-oiled machines, and individual thought would actually get in the way.

Sunday, July 28, 2019

Keurig's Decision to Implement DRM Technology In Future Coffee Brewers Essay

This research will begin with the statement that, with Keurig planning to expand its coffee brewing business to new levels, the company sees protecting its digital rights as the main step forward as it keeps third parties out of its operations. With competition also in place from its main rivals, TreeHouse Foods and the Rogers Family Company, how well Keurig survives with its new technology is yet to be seen, as both competitors have sued Keurig for unfair competition in creating a monopoly. A great coffee war is coming in 2014: with Keurig taking this direction, third parties will have no way into the coffee market, as Keurig plans to take a monopoly position and their cheap coffee pods will not work in the new machine. Keurig's chief executive officer claims that this will only boost the performance of their coffee business, meaning that the consumer will suffer from increased coffee prices while innovation makes its new entry into the coffee market.

This battle has even been transformed into litigation, as TreeHouse Foods sued Green Mountain Coffee, the parent company of Keurig, back in February of this year, claiming that Keurig has engaged in unfair competition in the market by creating a monopoly environment that would drive many from the coffee brewing market. Another fight is also coming, as the Rogers Family Company is considering litigation on the same grounds. Jon Rogers claims that if Green Mountain Coffee is allowed to introduce the Keurig 2.0 machine with digital restrictions in it, this will amount to a restraint of trade, and would mean that Keurig 1.0 was the only open brewer in the market. How this plays out in the coming months will be interesting to follow as the coffee giants battle it out in the corridors of justice.
The Keurig 2.0 with digital rights management will block unlicensed K-cup alternatives used by coffee brewers; it is of great importance to any big and historic company to protect its heritage by embracing current technology.

The effect of atmosphere on customer perceptions and customer behavior Essay

This research will begin with the statement that in the current age of globalization and increased product diversity, the shopping behavior of most people has changed significantly. Gone are the days when a shopper would simply walk into his or her favorite store and purchase what he or she likes. The shopping and customer world has changed, with many more options available: even the option of going to the store or supermarket has been expanded to include online shopping and deliveries. In some ways the shopping experience has evolved, with more customers focusing less on the purchase and more on the experience of shopping. Retailers also have better options in their products, prices, and store spaces, allowing for greater diversity in what they offer.

Retailers are also eager to gain brand loyalty from customers as an added business advantage. In order to secure such loyalty, retailers have set out to improve the environment in their stores. They believe that with an enticing store environment they can promote positive emotions and feedback from customers and draw them into the stores. Factors relating to the store's environment can impact customer feelings and experiences, also affecting their purchasing behavior, their level of consumption, how much they spend, and their satisfaction with the experience. A good experience while shopping in a store would likely prompt a repeat visit in the future; it can also facilitate spending in the store, including impulse purchasing. Currently, profits from chain store supermarkets are not remarkable, and concerns have been raised about how to provide pleasant and inviting shopping experiences for customers in order to increase customer spending as well as the time customers stay in the store.

Saturday, July 27, 2019

Bioterrorist Threat Essay

Terrorists value biological weapons for their ability to cause mass panic among the people. Moreover, such threats cause massive disruption to the operation of a country, helping terrorists achieve their aims. Bacteria are free-living microscopic organisms known to occupy extreme habitats. These organisms have no nuclear membrane and lack most of the other organelles found in ordinary cells, which makes it difficult to identify effective agents or medicines to deal with them. Bacteria such as anthrax are highly contagious, hence their application in biological warfare. Moreover, anthrax causes high mortality due to its short incubation period. Anthrax bacteria also transform into spores to survive extreme conditions such as high temperatures, extreme radiation, and lack of water or nutrients. Such characteristics make the bacteria nearly indestructible and, hence, an effective warfare agent.

Viruses are acellular agents that thrive as parasites in other living cells. Unlike bacteria and fungi, viruses are not considered living organisms, since they lack the nucleic acid replication machinery present in single-celled organisms such as bacteria. When viruses occupy a living cell, they interfere with normal cell metabolism, causing the death of the cell. Infected cells release protein compounds known as cytokines in response to the attack; these agents are responsible for the resulting symptoms. However, it is difficult to differentiate between viral and cell processes, which makes it difficult for scientists to develop antiviral medicines. Viruses are effective agents of biological terrorism since they are easy to transport and disseminate (Block, 2001). In particular, viral agents can be transported in aerosol form, making them attractive to terrorists. Chimera viruses are potential viral agents for biological weapons; these viruses are generated by combining the genetic material of other viruses.

Friday, July 26, 2019

Diversity and Inclusion Essay

These tests are administered in the student's primary language, with more than one type of test given for each disability tested (LD Online, 2010). When the disability has been isolated, the third component provides for an IEP, an Individualized Education Plan. This is an organized approach to providing targeted special education to meet that student's specific needs. It is formulated by a team of professionals, including the parents, who meet annually to discuss progress and areas for improvement. The IEP must contain certain parameters: the current level of academic achievement, annual and short-term goals, frequent evaluation using objective criteria, the list of special education services and the environment required, the extent of mainstreaming with explanations for any lack of mainstreaming, the date for commencement of services as well as the estimated duration, and an annual progress report updating achievement of goals (LD Online, 2010).

The fourth component states that children should be educated in the least restrictive environment. This means that, for the most part, handicapped children should be with their non-handicapped peers unless special circumstances prohibit it. Program aides are provided to many mainstream classes to assist children with special needs so they can remain in the classroom with their peers. Occasionally, behavioral issues require a student to be removed to allow for stabilization, followed by a return to the classroom when the student is able (LD Online, 2010).

The fifth component is one of due process, with rights for the parents and child with regard to accountability and fairness. It contains the following provisions: 1) confidentiality regarding both the family and...

This essay underlines that every person born with disabilities has the right to receive an education that will help him master the surrounding environment and allow him to make a contribution to the world at large.
The Individuals with Disabilities Education Act of 2004 is the latest comprehensive package, providing not only educational services but also supportive technology and services to help children access the educational curriculum. In addition to the standard learning disabilities, children with traumatic brain injury, autism, benign mental disorders, and visual and auditory impairments are now provided services under this legislation. A team of highly qualified professionals partners with the child's parents to monitor progress and assure that quality services are provided for the child. These children are no longer forced to live a life of mediocrity, because their needs are met early in life, during the cognitive development stage, when intervention is the most effective remedy for the prevention of further disability.

When a professional suspects a child may have a disability, they must first attempt to resolve the issue without involving the special needs team. The parents are also a part of this process. Sometimes just talking with the child and parents provides insight into the situation, allowing them to find alternative relief. If at least two alternative approaches to instruction in the regular classroom do not improve the situation, then the child may be referred for a special needs evaluation.

Thursday, July 25, 2019

Current Event Essay

The essay seeks to achieve a two-fold objective, to wit: (1) to provide explanations of the issues discussed within the article; and (2) to demonstrate the ways the article relates to the course material. Terkel noted that Pelosi emphasized that Republicans were advancing legislation to limit or restrict access to abortion by (1) preventing the use of taxpayers' money to fund abortion-related services; (2) denying tax credits to employers or business establishments whose health coverage of employees includes abortion access; (3) denying "federal family-planning funds under Title X to groups that offer abortion access" (Terkel, 2011, par. 4); and (4) allowing hospitals to turn away women who opt to terminate a pregnancy, even for the purpose of saving their lives (Terkel, 2011, par. 5).

The concerns raised by Pelosi relate to issues of women's health, particularly reproductive health and reproductive rights (Kirk and Okazawa-Rey, 2004, p. 173). The topic of abortion remains controversial when taken within a global perspective. Beyond being barred and illegal in predominantly Catholic nations, the procedure has been monitored for the risks it poses to women (Malter and Wind, 2012). Accordingly, "research from WHO shows that complications due to unsafe abortion continued to account for an estimated 13% of all maternal deaths worldwide in 2008; almost all of these deaths occurred in developing countries" (Malter & Wind, 2012). Pelosi's concern, as disclosed in the article, focused primarily on the legislation that sought to deny federal family planning funds to groups that offer access to abortion services. As averred, "I can't believe that everybody who is anti a woman's right to choose is anti-birth control and contraception and family planning" (Terkel, 2011, par. 7).
Aside from denying the right to avail of abortion services or restricting access to them, these pieces of legislation actually aim to limit the funds to be

Wednesday, July 24, 2019

Acid Rain Part I Essay

According to the Federal Monitoring Data report, Pennsylvania has, since 1987, ranked first among states experiencing excessively acidic rainfall. However, the level of acidity varies from place to place within Pennsylvania. The most acidic rainwater is found at Leading Ridge in Huntingdon County, where the average rainfall pH is 4.08. This pH value is roughly 33 times more acidic than normal, unpolluted rainwater. Any value below a pH of 7 is considered acidic, and the lower the value, the more acidic the rainwater; normal rainwater has an average pH of 5.6 (Park, 2013). Lewistown, Pennsylvania is therefore affected by coal-fired power plants, large numbers of automobiles, and factories that emit pollutants into the atmosphere. These pollutants in the atmosphere then form acid rain, fog, snow, and other particulate matter (Park, 2013).

Tuesday, July 23, 2019

Small Business Management Assignment

Studying the online sales data of large chain stores like Gap cannot provide an accurate assessment of the situation faced by startup ventures in e-commerce. A better approach would be case study analysis of small vendors operating in the same kinds of clothing categories as the planned clothing store, to see which strategies led to either success or failure. Further secondary information can be gathered, for a prescribed fee, from analyses done by research companies on e-commerce, the expected profitability of online clothing stores, and similar topics. Some primary research can also be conducted, either on the marketing strategy or on the tastes and preferences of customers, through interviews or surveys. These would indicate what type of apparel customers are looking for and what kind of service they expect from an online store, and help judge how far you can satisfy their needs. Research is an essential component of designing a suitable marketing strategy, and more extensive research would steer the strategy toward success. Any cost incurred in conducting the research should be considered an investment.

2. In designing the marketing strategy, the first question is what kind of segmentation the clothing line is going to target. After the strategy has been identified, it becomes easier to direct resources and marketing campaigns toward your core customers, increasing the chances of customer acquisition and retention.

Unsegmented strategy: Followed by clothing stores with either a uniform range of clothes (plain jeans, or a category like headwear) or with margins large enough to cater to the masses. Their customers could be from all walks of life, but some characteristics could be as follows:
- Availability of disposable income not necessary
- Single individuals or families in need of regular new clothes
- Comfortable with social outings (like shopping)
- Have an interest in advertising; can be persuaded into a trial purchase

Multisegment strategy: In this strategy, the clothing line would be designed to appeal only to certain distinct segments, and these are the customers the company will target. Considering a clothing line with economical, casual clothes as well as trendy clothes for 'tweens', there are two different groups of customers targeted:

A. Economical, casual clothes
- Middle-aged males or females
- Most probably part of a family unit, with kids
- Working full time or part time
- Have a casual social life, with kids or family
- Mid-tier income level
- Emergency savings but a low amount of disposable income
- Time conscious
- Would be persuaded to purchase clothes if it is convenient and seems like a good bargain

B. Trendy clothes for 'tweens'
- Children and teenagers from ages 11-16
- Coming from mid- and upper-tier income families
- Have a monthly allowance
- Allowed to choose their own clothes
- Interested in pop culture and the latest fashions
- Concerned about the image their clothing portrays
- Parents are the primary earners
- May be able to use the 'nag factor' to persuade parents

Single-segment strategy: Also known as a 'concentrated' or 'niche' strategy, it is adopted by firms with a low amount of resources to spend on marketing, or with a unique product which can only be marketed to a single segment of customers.

Monday, July 22, 2019

Summer for a Camp Skyline Ranch Counselor Essay

When I realized that my final days of high school were fast approaching, I began to ask myself what I would do with my life. From that point, thoughts began to creep in of what I could do to better prepare myself for the future being pushed upon me. Since I have known from a young age that I wanted to be an educator, my search began for a summer job that would involve surrounding myself with children. After endless hours of internet searching for the job that would best suit me, I discovered a Christian summer camp, a place that refused to leave my mind for the next few days. Days passed and prayers were sent up before I finally came to realize that this was the job I needed. Working at a Christian summer camp would be a great job for any young person because it is a way to share God's word with young girls, to push limits and set new goals, and to expose the counselor to what teachers and educators face on a day-to-day basis.

God's word always needs to be shared with everyone, but especially with the youth. God has always been a huge part of my everyday life, and finding a place where I would be able to share this joy was a priority. Camp Skyline was undeniably that place. Each night we would sit around a campfire just to hear songs of praise to God's word. Voices as sweet and soft as honey would travel through the mountain air like bees on a summer day. Beneath that sound was the faint crackling of the fire that blazed before us and gave light to each face. On Sundays we had "Skyline Church," where everyone was to wear pure white. Upon entering church I would see girls of all ages running around in white dresses catered to fit each of the hundreds of girls.
During those next few hours, praises would be lifted and hearts would be led to God like a lost child in a store searching for a parent who would soon be recovered. Some knew where they were being led, while others only knew of the joy that was overtaking the friends around them. Blessings would overflow in my heart after seeing such tiny innocence find something that would forever change every life that heard His call.

Pushing limits and setting new goals is a necessity for being a successful person, and challenging myself to step out of my comfort zone was definitely an ambition I had for the summer. My first class to assist in was ropes. There I would send girls off of zip lines and unusually high swings, and belay girls to their destination at the top of the trees. The smell of sap on those large oak trees surrounded me the way the scent of cake escapes a bakery and fills the streets. My heart sank as I was assigned to be in the tree sending the girls off of the zip line. As I crept my way up that never-ending oak, I realized that this was the adventure I had wanted. Reaching the top, I looked out to see the sun gleaming down and beautiful blue skies all around, as if God himself had spent His morning painting that moment for me. Girls began to climb up and jump off, fright not even a possibility. To my astonishment, panic had left my mind as well, and peace had taken its place. By the end of those hours, I was just as eager to jump out of that tree, only to be caught by a thin cable attached to a black rope like a dog on a leash.

Teaching is very much underestimated, much as being a camp counselor can be. Teaching is a desire I have had for as long as I can remember, and being a summer counselor is like being a teacher in many ways. The job consists of continuous hours of helping children reach the goals they have set for themselves, and sometimes just being a comforting hand in a time of need.
Encouragement and perseverance are the keys to succeeding in this job. When I walked around camp, I could feel the desperation to achieve a task creeping through the air like a robber in a bank. Much like teachers, counselors must give the reassurance that many children need in order to succeed. "You can do it!" is a phrase heard often throughout these wide open spaces. Nights are spent making sure the girls get enough sleep to make it through the rest of camp while still allowing them to have fun during the experience. Waking up to find a girl standing over your bed saying she is sick is not a rare occurrence. Drama among the girls, cleanliness, sleepless nights, and being whatever support a girl needs in the moment can sometimes be challenging. Nonetheless, rewarding life lessons can still be learned in moments such as these. A job like this gave me a whole new appreciation for the people who are willing to spend endless hours with children, as teachers do on a day-to-day basis. In closing, working at a Christian summer camp would be a great job for any young person because it is a way to share God's word with young girls, to set new limits, and to see what teachers and educators face on a day-to-day basis. I reached all of the goals I had set for myself for the summer and was able to make new goals out of the experience as well. When it came time to leave, I had too many stories to repeat and new standards to take home. My heart remains overjoyed today when I look back on the experiences I had. A strong smell can always take me back to the endless scent of dirty Chacos. Campers leave with dirty laundry and a stream of tears to follow, for they dread seeing leaving day arrive. As for me, I am already counting down the days until opening day of camp next year while my heart searches for small things to take me back to that wonderful place on the mountain.


Empowerment of Talent Essay The present paper is an investigation of how the empowerment of talent is being met at a time when the entire world is going through rapid changes in almost all walks of life. The changes are bringing the countries of the world together, creating a global village, and it is a time of mutual gain and benefit. In this scenario, all major players have been active in the race for competitive advantage that has spread worldwide. Diversity is what the world has witnessed, and it has affected all people, whether in cities or the remotest villages. Technological and scientific advances are seen as the ultimate solution to the problems of the world. However, one thing that is seriously being talked about is the development and effective utilization of human capital in every area of work and society. This at once opens a wide door of arguments and conflicts between nations, especially between the developed and the developing world. This paper undertakes an extensive investigation of the issue of talent and the challenges the world faces in the empowerment and retention of talent worldwide. The paper looks at a number of different sources to gather a range of viewpoints and reach an analysis. In the conclusion section, the paper makes recommendations along with the findings of the investigation. 2- Defining Globalization Different writers see the concept of globalization in different terms and diversified contexts. However, there is a common link between their definitions and explanations of the phenomenon of globalization in today's discourse.
For example, Samli (2002) defines globalization in the context of the technological advances that have taken the entire world by storm; other phenomenal milestones that the world has covered in the journey of globalization are the outbursts of information and related technologies, common know-how that has been increasing dramatically with the advent of these new concepts, and financial flows that have reached almost every corner of the world: the rise of corporate culture. According to the author, a major portion of globalization is owed to the technological advancement of the world. The author defines technology as "the application of science to economic problems". Technology is also important in the process of elevating the general standard of living on earth. In terms of globalization, technology has not only improved rapidly in recent times, but its transfer to the remotest areas of the world has also made the overall progress a material reality. This is why people from one corner of the world to another are connected with each other via satellite, the Internet, and so forth. Tracing the history of globalization takes us to the nineteenth century when, according to Samli (2002), "globalization was well on its way". Such technological strides as the telegraph, steamships, and railroads had back then started the process of globalization, shrinking the entire world. The convergence of economies with the flow of capital continued as international migrations and information technology flows were driven by trade and service activities, which were constantly growing at a rapid pace. However, there are certain issues that the author brings to the reader's attention in connection with globalization.
As developing countries see a way forward in globalization by benefitting from agricultural reforms and the provision of services (which will still be critical to their future development), it is highly necessary that this process continue; otherwise, the author foresees a mismatch between opportunities and the worldwide struggle to grow. We can sum up Samli's definition of globalization by noting at least four important areas where the concept is fruitful for the entire world: 1) possibilities for specialization and comparative advantage; 2) increased productivity through specialization; 3) more competition (both locally and internationally) and the reduction of monopoly; 4) possibilities for the transfer of technology and improved production worldwide. Hence, entrepreneurship is one answer to a number of challenges that the entire world is facing: adjustment to global challenges, like the empowerment of talent without fighting the war for talent, is possible through the proper development of entrepreneurship.

Sunday, July 21, 2019

Introduction To Social Media Analysis Marketing Essay

Introduction To Social Media Analysis Marketing Essay Introduction to Social Media Analysis People are talking about you: your company, your products, your people. With modern, digital communications tools, they're publishing their thoughts to a worldwide audience. They write on blogs and in online communities, and they share pictures and videos on popular sites such as MySpace and YouTube. Sometimes, the issues they raise show up on the front page of major newspapers. Paying attention to these online conversations is a new imperative for anyone who cares about their company's reputation. Social media analysis is the broad term for the services and tools you will use to pay attention. It incorporates monitoring, measuring, and analyzing Internet-based social media, usually combining automated systems and human insight to turn raw data into useful information. It is most often used in marketing and communications/PR functions, which is why some people call it brand monitoring. But there is more to it than monitoring, and it is not used only in marketing. Customer service, product groups, competitive intelligence, and investor relations (or any other relations function) will find useful information. Specialized applications for institutional investors, lenders, and supply-chain managers are also available. If you use information, social media analysis opens vast new sources. Idea behind This Study: In Pakistan very few people are promoting their businesses through social media marketing; most of our Internet usage falls under the heading of ENTERTAINMENT, so I want to study where people actually go while using the Internet in Pakistan. How can social media marketing help us? How is it used, what are its advantages and disadvantages, how can interest in it be built, and how are those who are using it and making money through it doing so?
ABSTRACTS SAP: A Company Transforms Itself Through Social Media This case study was written to demonstrate how a company can create a social networking platform that not only achieves its tactical goals of pushing company content to its target audience, but also serves broader, strategic purposes aligned with the company's corporate profile and brand. The study will look at the technologies used to develop the SDN and BPX networks, the quality of the user experience, and the metrics achieved, as well as issues related to maintaining and growing the network. SAP faced a new challenge. No longer was it content simply to be a developer of much of the world's most successful business software. Instead, it wished to become a platform company, built on its own Web-based platform solution: NetWeaver. That meant it had to open its platform to developers outside its own walls, who would drive innovative ways for businesses to use this platform to solve their business problems. It meant it had to talk to a huge new audience that had not been part of its prior focus: developers across the globe who may or may not be SAP employees. Additional objectives included a desire to increase adoption of SAP products and to provide a platform of innovation for SAP and its partners. To reach this main goal, SAP launched two new community platforms, called SDN and BPX. Both networks are transparent, anyone can sign up, and both are searchable. Users can subscribe and obtain RSS feeds from the most popular bloggers, and all the content is accessible to social book-marking sites, as well as from Google and other search sites. SAP was formerly viewed as rigid, monolithic, and overly process-oriented, but after its adoption of social media it is now viewed as open and collaborative. Methodology: Discussion forums were opened up in Web page format, where a rate of about 4,000 posts per day was recorded. These were followed by blogs, initially contributed by employees and quickly opened to outsiders.
Active contributors include customers, consultants, and other opinion leaders, and the blogs feature everything from long-form essays on relevant topics to shorter bursts about future trends or interesting innovations. Conclusion Finally, the author speculates on how the success of the combined networks could lead to further revenue growth and enhancement of current corporate communications. How People Perceive Online Behavioral Advertising The researchers performed a series of in-depth qualitative interviews with 14 subjects who answered advertisements to participate in a university study about Internet advertising. Subjects were not informed that the study had to do with behavioral advertising privacy, yet they raised privacy concerns on their own, unprompted. The researchers asked, "What are the best and worst things about Internet advertising?" and "What do you think about Internet advertising?" Participants held a wide range of views, ranging from enthusiasm about ads that inform them of new products and discounts they would not otherwise know about, to resignation that ads are a fact of life, to resentment of ads that they find insulting. Many participants raised privacy issues in the first few minutes of discussion without any prompting about privacy. The researchers discovered that many participants have a poor understanding of how Internet advertising works: they do not understand the use of first-party cookies, let alone third-party cookies; they did not realize that behavioral advertising already takes place; they believe that their actions online are completely anonymous unless they are logged into a website; and they believe that there are legal protections that prohibit companies from sharing information they collect online. They found that participants have substantial confusion about the results of the actions they take within their browsers, do not understand the technology they work with now, and clear cookies as much out of a notion of hygiene as for privacy.
They also found divergent views on what constitutes advertising. Industry self-regulation guidelines assume consumers can distinguish third-party widgets from first-party content, and further assume that consumers understand data flows to third-party advertisers. Instead, the study finds that some people are not even aware of when they are being advertised to, let alone aware of what data is collected or how it is used. Methodology: A series of in-depth qualitative interviews with 14 subjects was conducted. A modified mental-models protocol of semi-structured interviews was followed, using standard preliminary questions for all participants while also following up individually to gather each participant's understanding of and reaction to behavioral advertising in particular. Conclusion Consumers have a very clear understanding of when and where Google search displays advertisements. However, consumers do not understand which parts of the New York Times website are advertisements. They lack the knowledge to distinguish widgets from first-party content. Consequently, it is overly optimistic to believe consumers know their data flows to widget providers as to a first party. THE VALUE OF A FACEBOOK FAN: AN EMPIRICAL REVIEW As Facebook matures as a viable marketing and customer service channel, many organizations are looking to quantify and understand the impact of their overall marketing investment on their business. Quantifying the return on investment (ROI) of Facebook marketing efforts involves multiple variables, and companies often fail to understand and properly value their efforts in terms of the potential long-term business benefits of the Facebook channel. Many brands overcomplicate their measurement requirements by tracking dozens of independent variables. Many oversimplify by trying to apply a single-number concept of value, and far too many fail to quantify ROI in such a way as to convince a CFO of the merit of increasing or shifting investment towards Facebook marketing.
Syncapse has adopted a unique approach to understanding the financial returns that social members on Facebook provide to a business. Facebook fan ROI can be understood through knowledge of the key performance indicators that have traditionally led to increased sales and profit, together with the key differences between Facebook users who have opted to fan a brand and those who have not. This study examines the five leading contributors to Facebook fan value: (1) product spending, (2) brand loyalty, (3) propensity to recommend, (4) brand affinity, and (5) earned media value. Methodology The quantitative research for this Syncapse undertaking was conducted in conjunction with Hotspex Market Research and consisted of a 25-minute survey using their online panel. Data was collected from over 4,000 panelists across North America in June 2010. Conclusion As growing audiences migrate to social networks like Facebook, a brand's ability to connect with and influence these customers must shift away from traditional marketing strategies. Facebook fans represent a significant opportunity to drive revenue, brand enhancement, and loyalty without incurring the considerable cost-per-person of conventional marketing. More importantly, such Facebook strategies allow for a discernible ROI that most other approaches do not. Fans are an extremely valuable segment of the Internet audience and should be addressed with specific strategies to nurture their ongoing participation and influence. Unlike traditional campaign-based marketing, Facebook-based marketing through well-crafted fan engagement has no defined shelf life and can be more readily integrated into the day-to-day operation of the enterprise.
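To make the arithmetic behind such a per-fan valuation concrete, the sketch below sums hypothetical dollar estimates for the five contributors into a single figure and scales it to a fan base. All field names and dollar amounts are invented placeholders for illustration; they are not figures from the Syncapse study.

```python
# Illustrative sketch only: combining per-fan value contributors into one
# estimate. Every dollar figure below is a hypothetical placeholder.

# Assumed annual value per fan, split across the five contributors
fan_value_contributors = {
    "product_spending": 50.0,         # incremental spend vs. non-fans
    "brand_loyalty": 10.0,            # value of reduced churn
    "propensity_to_recommend": 15.0,  # referral-driven revenue
    "brand_affinity": 5.0,            # softer brand-equity lift
    "earned_media_value": 20.0,       # impressions fans generate for free
}

def total_fan_value(contributors):
    """Sum the individual contributors into one per-fan estimate."""
    return sum(contributors.values())

def page_value(contributors, fan_count):
    """Scale the per-fan estimate to a whole fan base."""
    return total_fan_value(contributors) * fan_count

print(total_fan_value(fan_value_contributors))        # 100.0 with these inputs
print(page_value(fan_value_contributors, 10_000))     # 1000000.0
```

The point of the decomposition is that each contributor can be estimated, debated, and refined separately, rather than arguing over one opaque "value of a fan" number.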
Social Media Use in the United States: Implications for Health Communication Given the rapid changes in the communication landscape brought about by participative Internet use and social media, it is important to develop a better understanding of these technologies and their impact on health communication. The first step in this effort is to identify the characteristics of current social media users. Up-to-date reporting of current social media use will help monitor the growth of social media and inform health promotion and communication efforts aiming to utilize social media effectively. The purpose of the study is to identify the sociodemographic and health-related factors associated with current adult social media users in the United States. Methods: Data came from the 2007 iteration of the Health Information National Trends Survey (HINTS, N = 7674). HINTS is a nationally representative cross-sectional survey on health-related communication trends and practices. Survey respondents who reported having accessed the Internet (N = 5078) were asked whether, over the past year, they had (1) participated in an online support group, (2) written in a blog, or (3) visited a social networking site. Bivariate and multivariate logistic regression analyses were conducted to identify predictors of each type of social media use. Conclusions: The recent growth of social media is not uniformly distributed across age groups; therefore, health communication programs utilizing social media must first consider the age of the targeted population to help ensure that messages reach the intended audience. While racial/ethnic and health-status-related disparities exist in Internet access, among those with Internet access these characteristics do not affect social media use. This finding suggests that the new technologies represented by social media may be changing the communication pattern throughout the United States.
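As a rough illustration of the kind of logistic regression used to identify predictors, the sketch below fits the log-odds of social media use against age on simulated data (the HINTS records are not reproduced here). The data-generating process, coefficients, and sample size are all invented assumptions; the point is only that the fitted age coefficient comes out negative, mirroring the age gradient the study describes.

```python
# Minimal logistic regression sketch on SYNTHETIC survey-like data.
# Not the HINTS data; all numbers below are illustrative assumptions.
import math
import random

random.seed(0)

def simulate(n=2000):
    """Generate (age, uses_social_media) rows where younger respondents
    are more likely to report use."""
    rows = []
    for _ in range(n):
        age = random.uniform(18, 80)
        p = 1 / (1 + math.exp(0.08 * (age - 40)))  # prob declines with age
        rows.append((age, 1 if random.random() < p else 0))
    return rows

def fit_logistic(rows, lr=0.001, epochs=200):
    """Fit log-odds(use) = b0 + b1*age by batch gradient ascent on the
    log-likelihood (a bare-bones stand-in for a stats package)."""
    b0, b1 = 0.0, 0.0
    n = len(rows)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for age, y in rows:
            p = 1 / (1 + math.exp(-(b0 + b1 * age)))
            g0 += (y - p)          # gradient w.r.t. intercept
            g1 += (y - p) * age    # gradient w.r.t. age coefficient
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

b0, b1 = fit_logistic(simulate())
print(b1 < 0)  # True: the age coefficient is negative, use declines with age
```

In practice an analysis like this would use a statistics package with survey weights and standard errors, but the sign and relative size of such coefficients are what identify a variable as a predictor.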
DELIVERABLE II Introduction Billions of people create trillions of connections through social media each day, but few of us consider how each click and key press builds relationships that, in aggregate, form a vast social network. Passionate users of social media tools such as email, blogs, microblogs, and wikis eagerly send personal or public messages, post strongly felt opinions, or contribute to community knowledge to develop partnerships, promote cultural heritage, and advance development. Devoted social networkers create and share digital media and rate or recommend resources to pool their experiences, provide help for neighbors and colleagues, and express their creativity. The results are vast, complex networks of connections that link people to other people, documents, locations, concepts, and other objects. New tools are now available to collect, analyze, visualize, and generate insights from the collections of connections formed from billions of messages, links, posts, edits, uploaded photos and videos, reviews, and recommendations. As social media have emerged as a widespread platform for human interaction, the invisible ties that link each of us to others have become more visible and machine-readable. The result is a new opportunity to map social networks in detail and at a scale never before seen. The complex structures that emerge from webs of social relationships can now be studied with computer programs and graphical maps that leverage the science of social network analysis to capture the shape and key locations within a landscape of ties and links. These maps can guide new journeys through social landscapes that were previously uncharted. Social network analysis is the application of the broader field of network science to the study of human relationships and connections. Social networks are primordial; they have a history that long predates systems like Facebook and Friendster, and even the first email message.
Ever since anyone exchanged help with anyone else, social networks have existed, even if they were mostly invisible. Social networks are created from any collection of connections among a group of people and things. In the twenty-first century, network science has blossomed alongside a new global culture of commonplace networked communications. With widespread network connectivity, within just the past few decades, billions of people have changed their lives by creatively using social media. We use social media to bring our families and friends closer together, reach out to neighbors and colleagues, and invigorate markets for products and services. Social media are used to create connections that can bind local regions and span continents. These connections range from the trivial to the most valued, potent collaborations, relationships, and communities. Social media tools have been used successfully to create large-scale successful collaborative public projects like Wikipedia, open source software used by millions, new forms of political participation, and scientific collaboratories that accelerate research. Unheard of just a few years ago, today systems such as blogs, wikis, Twitter, and Facebook are now headline news with social and political implications that stretch around the globe. Despite the very different shapes, sizes, and goals of the institutions involved in social media, the common structure that unifies all social media spaces is a social network. All of these systems create connections that leave traces and collectively create networks. The rise of social media Social media are visible in the form of consumer applications such as Facebook and Twitter, but significant use of social media tools takes place behind the firewalls that surround most corporations, institutions, and organizations. 
Inside these enterprises, employees share documents, post messages, engage in extensive discussions and document annotation, and create extensive patterns of connection with other employees and other resources. Social media tools cultivate the internal discussions that improve quality, lower costs, and enable the creation of customer and partner communities that offer new opportunities for coordination, marketing, advertising, and customer support. As enterprises adopt tools like email, message boards, blogs, wikis, document sharing, and activity streams, they generate a number of social network data structures. These networks contain information of significant business value because they expose participants in the business network who play critical and unique roles. Some employees act as bridges or brokers between otherwise separated segments of the company. Others have patterns of connection indicating that they serve as sources of information for many others. Social network analysis of organizations offers a form of MRI or x-ray image of the organizational structure of the company. These images illuminate the ways the members of the organization are actually connected, in contrast to the formal hierarchies. Individual Contributions Generate Public Wealth Collections of individual social media contributions can create vast, often beneficial, yet complex social institutions. Seeing the social media forest, and not just the trees, branches, and leaves, requires tools that can assemble, organize, and present an integrated view of large volumes of records of interactions. Building a better view of the social media landscape of connection can lead to improved user interfaces and policies that increase individual contributions and their quality. It can lead to better management tools and strategies that help individuals, organizations, and governments apply social media to their priorities more effectively.
However, dangerous criminals, malicious vandals, promoters of racial hatred, and oppressive governments can also use social media tools to enable destructive activities. Critics of social media warn of the dangers of lost responsibility and respect for creative contributions when vital resources are assembled from many small pieces [1]. These dangers heighten interest in understanding how social media phenomena can be studied, improved, and protected. Why do some groups of people succeed in using these tools while many others fail? Community managers and participants can learn to use social network maps of their social media spaces to cultivate their best features and limit negative outcomes. Social network measures and maps can be used to gain insights into collective activity and to guide optimization of productive capacity while limiting the destructive forces that plague most efforts at computer-mediated communication. People interested in cultivating these communities can measure and map social media activity in order to compare and contrast social media efforts with one another. Around the world, community stakeholders, managers, leaders, and members have found that they can all benefit from learning how to apply social network analysis methods to study, track, and compare the dynamics of their communities and the influence of individual contributions. Business leaders and analysts can study enterprise social networks to improve the performance of organizations by identifying key contributors, locating gaps or disconnections across the organization, and discovering important documents and other digital objects. Marketing and service directors can use social media network analysis to guide the promotion of their products and services, track compliments and complaints, and respond to priority customer requests.
Community managers can apply these techniques to public-facing systems that gather people around a common interest and ensure that socially productive relationships are established. Social media tools have become central to national priorities, requiring government agency leaders to become skillful in building and managing their communities and connections. Governments at all levels must learn to optimize and sustain social media tools for public health information dissemination, disaster response, energy conservation, environmental protection, community safety, and more. Background to the Problem Billions of people now weave a complex collection of email, Twitter, mobile short text messages, shared photos, podcasts, audio and video streams, blogs, wikis, discussion groups, virtual-reality game environments, and social networking sites like Facebook and MySpace to connect themselves to the world and the people they care about. Twitter enables short exchanges ideal for efficiently pointing out resources or knowing what conferences people are attending, while discouraging in-depth discussion and analysis on the platform itself. In contrast, traditional blogs, without length limitations and with their support for sharing multimedia content and comments, are better suited to more in-depth presentations and conversations. Other media, including books, newspapers, wikis, email, social networking sites, and so forth, each have a set of properties that create a unique terrain of interaction. Learning to meet your objectives effectively using social media requires an understanding of that terrain and the social practices that have grown up around its use. One of the most exciting aspects of online social media tools is that they produce an enormous amount of social data that can be used to better understand the people, organizations, and communities that inhabit them.
More specifically, they create relational data: information about who knows or is friends with whom, who talks to whom, who hangs out in the same places, and who enjoys the same things. Social Media Design Framework Social media systems come in a variety of forms and support numerous genres of interaction. Although they all connect individuals, they do so in dramatically different ways, depending in part on the technical design choices that determine questions like these: Who can see what? Who can reply to whom? How long is content visible? What can link to what? Who can link to whom? Social media services vary in terms of their intended number of producers and consumers. An email is usually authored by just one person, whereas a wiki document is likely to be authored by several or even hundreds of people. An individually authored email might be sent to just one other person or broadcast to thousands. More generally, social media tools support different scales of production and consumption of digital objects. Many social media tools help individuals or small groups interact. Instant messaging (IM), video chat, and personal messaging within general-purpose social networking sites provide intimate communication channels comparable to phone calls and face-to-face office meetings. Social media can help individuals reach out to medium-sized groups of friends or acquaintances by broadcasting a personal message (e.g., a tweet sent to a user's followers on Twitter, or a post sent to a departmental email list) or by allowing others to overhear a comment (a post to someone's Facebook wall). They can also allow individuals to reach large groups through popular blog posts, podcasts, videos posted on sites like YouTube, or updates on Twitter by companies or celebrities with numerous followers.
Purpose of the Research Thousands of people are earning huge amounts of money through social media. This research will help us understand the path a person follows after entering social media and where it leads. The research will also help us understand how a person can earn from a particular social network website. Research Questions To understand the browsing patterns of individuals using social media networks. To check the awareness among people of earning through social media. To learn how an individual can earn through social media in Pakistan. To examine how these patterns can be used to gain maximum output in online advertising. DELIVERABLE III Medium of Research Social Network Theory As introduced above, social network analysis is the application of the broader field of network science to the study of human relationships and connections. Social network analysis helps you explore and visualize patterns found within collections of linked entities that include people. From the perspective of social network analysis, the treelike org chart that commonly represents the hierarchical structure of an organization or enterprise is too simple and lacks important information about the cross-connections that exist between and across departments and divisions. In contrast with the simplified tree structure of an org chart, a social network view of an organization or population leads to the creation of visualizations that resemble maps of highway systems, airline routes, or rail networks. Network analysts see the world as a collection of interconnected pieces.
Those studying social networks see relationships as the building blocks of the social world, each set of relationships combining to create emergent patterns of connections among people, groups, and things. The focus of social network analysis is on what lies between people, not within them. Whereas traditional social science research methods such as surveys focus on individuals and their attributes (e.g., gender, age, income), network scientists focus on the connections that bind individuals together, not exclusively on their internal qualities or abilities. This change in focus from attribute data to relational data dramatically affects how data are collected, represented, and analyzed. Social network analysis complements methods that focus more narrowly on individuals, adding a critical dimension that captures the connective tissue of societies and other complex interdependencies. Once a set of social media networks has been constructed and social network measurements have been calculated, the resulting data set can be used for many applications. For example, network data sets can be used to create reports about community health, comparisons of subgroups, and identification of important individuals, as well as in applications that rank, sort, compare, and search for content and experts. The value of a social network approach is the ability to ask and answer questions that are not available to other methods. This means focusing on relationships. Although analysts, marketers, and administrators often track social media participation statistics, they rarely consider relationships. Traditional participation statistics can provide important insights about the engagement of a community, but they say little about the connections between community members. Network analysis can help explain important social phenomena such as group formation, group cohesion, social roles, personal influence, and overall community health.
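The "bridge or broker" roles and simple centrality measures discussed here can be computed with a few lines of standard-library code. The toy network and member names below are invented for illustration, and finding cut vertices by removal-and-reachability is a simple stand-in for fuller measures such as betweenness centrality:

```python
# Sketch: spotting brokers in a toy "who talks to whom" network.
# Names and ties are invented; Dana links two otherwise separate clusters.
from collections import deque

edges = [
    ("Ann", "Bob"), ("Ann", "Cat"), ("Bob", "Cat"),   # cluster 1
    ("Eve", "Fay"), ("Eve", "Gil"), ("Fay", "Gil"),   # cluster 2
    ("Cat", "Dana"), ("Dana", "Eve"),                 # Dana bridges them
]

def adjacency(edges):
    """Build an undirected adjacency map from an edge list."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

def degree_centrality(adj):
    """Fraction of the other members each person is directly tied to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def is_connected(adj, skip=None):
    """BFS reachability over the network, optionally ignoring one member."""
    nodes = [v for v in adj if v != skip]
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w != skip and w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(nodes)

def brokers(adj):
    """Members whose removal disconnects the network (cut vertices)."""
    return sorted(v for v in adj if not is_connected(adj, skip=v))

adj = adjacency(edges)
print(brokers(adj))                              # ['Cat', 'Dana', 'Eve']
print(round(degree_centrality(adj)["Dana"], 2))  # 0.33: tied to 2 of 6 others
```

Note that Dana has one of the lowest degree scores yet is a broker the whole network depends on: exactly the kind of insight attribute-focused participation statistics miss.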
Significance of Research
Social media marketing is the process of promoting your site or business through social media channels, and it is a powerful strategy that will get you links, attention, and massive amounts of traffic. There is no other low-cost promotional method that will so easily give you large numbers of visitors, some of whom may come back to your website again and again. Whether you are selling products or services, or just publishing content for ad revenue, social media marketing is a potent method that will make your site profitable over time. (Maki, 2007)
Limitations
The following issues could be faced during the research:
Lack of awareness of utilizing social media among people in Pakistan.
No prior research available.
DELIVERABLE IV
LITERATURE REVIEW
Social Media Marketing
Social media marketing is the practice of promoting your business or sites through different social media channels; it is an effective plan that will drive traffic to your sites, earn more links, and capture people's attention. It is one of the lowest-cost ways of publicizing a website or product. It attracts a large number of visitors, some of whom may visit the website repeatedly. If your business deals with selling products or services, then social media marketing is one of the most effective ways to make your site profitable over a period of time. Those who do not know the worth of social media sites fall into three categories: (1) those who do not know anything about social media marketing; (2) those who are interested but do not know how to use social media; and (3) those who do not have confidence in the value that social media marketing can bring.
Why Social Media?
Nowadays blogs often rank higher than static websites because their fresh, relevant content matches search criteria and earns top positions. The more links you have, the better your site will be ranked by search engines.
When your website receives more natural permanent links, it builds more authenticity for your site and builds search engine trust in your website. This helps it rank even for competitive keywords. Social media is an essential tool for promoting your site and a genuinely worthwhile way to build its reputation. (1) Social media marketing helps you get more natural links to your site, and your website is exposed to more people, which helps you drive more traffic to it. (2) It is a dependable method: if you utilize it properly and successfully, social communities can drive more traffic than you previously received from search engines. (3) Social media marketing is a community-based marketing method, and it does not harm the other methods that drive traffic to websites regularly. Social media marketing helps a site become known all around the globe. Social media sites include online communities, social networks, blogs, wikis, and other types of media for marketing, sales, and customer support. The different social media marketing tools include Facebook, Orkut, hi5, Twitter, LinkedIn, blogs, YouTube, and Flickr. Social media marketing is among the cheapest methods of advertising, and it has become one of the leading business venues to use. Nowadays business technology buyers participate more socially to promote their businesses. Building an attractive website can take a great deal of time and effort, and getting ranked in search engines can take years in order to build a competitive position. Social media marketing can bring you a huge amount of traffic in a single day. Once you become familiar with social media tools, it becomes easy to reach an audience and satisfy its needs. The rapid growth of social media marketing shows that the future of the internet lies in social media marketing.
The major players in the social media market may reinvent themselves over time, and online businesses will have to change their approach accordingly. With social media marketing you can easily compete with your counterparts and attain the ends you have in mind.
The Value of Marketing through Social News Websites
For those who don't understand or see the value of social media websites, let's take a look at the benefits of creating viral content and effectively promoting it through social media channels. Developing link bait and successfully making it popular on various social media websites like Digg and StumbleUpon leads to multiple benefits for any website:
Primary and Secondary Traffic. Primary traffic is the large number of visitors who come directly from social media websites. Secondary traffic is referral traffic from websites which link to you and send you visitors after they come across your content through the social sites.
High Quality Links. Becoming popular on social news websites like Digg or Reddit will get you a large number of links, some of which

Saturday, July 20, 2019

Essay on Behavior in All Quiet on the Western Front and Lord of the Flies

Comparison of Human Behavior in All Quiet on the Western Front and Lord of the Flies. An author's view of human behavior is often reflected in their works. The novels All Quiet on the Western Front by Erich Maria Remarque and Lord of the Flies by William Golding are both examples of works that demonstrate their author's view of man, as well as his opinion of war. Golding's Lord of the Flies is highly demonstrative of Golding's opinion that society is a thin and fragile veil that, when removed, shows man for what he truly is: a savage animal. Perhaps the best demonstration of this given by Golding is Jack's progression to the killing of the sow. Upon first landing on the island Jack, Ralph, and Simon go to survey their new home. Along the way the boys have their first encounter with the island's pigs. They see a piglet caught in some of the plants. Quickly Jack draws his knife so as to kill the piglet. Instead of completing the act, however, Jack hesitates. Golding states that, "The pause was only long enough for them to realize the enormity of what the downward stroke would be." Golding is suggesting that the societal taboos placed on killing are still ingrained within Jack. The next significant encounter in Jack's progression is his first killing of a pig. There is a description of a great celebration. The boys chant "Kill the pig. Cut her throat. Spill her blood." It is clear from Golding's description of the revelry that followed the killing that the act of the hunt provided the boys with more than food. The action of killing another living thing gives them pleasure. The last stage in Jack's metamorphosis is demonstrated by the murder of the sow. Golding describes the killing almost as a rape. He says, "Jack was on... ...ough the actions of his characters, attempts to illustrate that under chaotic circumstances, when removed from normal society, man reverts to what his nature deems him to be, a destructive creature.
Remarque's characters, on the other hand, manage to show compassion and humane treatment of others despite being thrust into a situation more terrible than that of Golding's characters. Where Golding feels war is a result of humankind's vile nature, Remarque sees it as an evil brought about by only a select few.    Works Cited Golding, William. Lord of the Flies. New York: Berkley, 1954. Babb, Howard S. The Novels of William Golding. N.p.: Ohio State UP, 1970. Beetz, Kirk H., ed. Beacham's Encyclopedia of Popular Fiction. Vol. 5. Osprey: n.p., 1996. 5 vols. Epstein, E. L. Afterword. Lord of the Flies. By William Golding. New York: Berkley, 1954.   

Canada Should Sell Water to America Essay -- Argumentative Essays

Since more than 70% of the Earth is covered with water, one would assume that there is enough water for everyone. However, this assumption would be incorrect. Only 3% of that water is considered usable, and 2% of the usable water is locked in the polar ice caps. This leaves 1% of that water for human use. Canada possesses a substantial amount of this water, while other countries are less fortunate. One of these countries is the United States of America, the biggest user of water in the world. The Americans are looking for a new source of water and have been hoping Canada can be this new source. The Canadian government should accept the proposal to sell water in bulk to the United States due to the availability, the safety and the economic opportunities it would bring. Water is easily available to Canadians. According to Report Newsmagazine, Canada possesses 20% of the world's fresh water. Report also states that Canada possesses only 0.5% of the world's population. This means that on a per capita basis, Canada has more water than any other nation. Furthermore, water is a renewable resource, which means that once it is used, it may be used again after the water cycle. Many other materials Canada sells to the United States are not renewable. Dennis Owens, the senior Frontier Centre analyst, says, "Here we are giving non-renewable oil and gas to the U.S., then water falls from the sky and goes into the ocean and we won't give it to them." In Newfoundland, Gisbourne Lake has the potential to drain 500,000 cubic meters of water per week. This drainage would only lower the level of the lake one inch, and this would naturally be replenished within ten hours. Canada has cut down trees that will take 100 years to grow back and sold them. S... ...e-not' province related to others" Manitoba could now have the potential to become just as industrialized and important as a province such as Ontario. The whole of Canada would benefit economically from water schemes.
Selling water to the United States would be possible and safe, and would create numerous economic opportunities, which Canada cannot afford to pass up. Canada has access to more fresh water than any other country, more than Canadian citizens will ever use. Sharing this water with the United States, and getting something back in return, would be safe for the ecology, and Canada would still have enough water for itself. The water will always be waiting there; the economic opportunity, however, is one that must be taken advantage of now. The United States will not wait forever for Canada to make a decision. The Canadian government needs to act now!

Friday, July 19, 2019

Technology & Business Essay -- essays research papers

Preliminary draft of Q1: The Internet has opened up a range of new marketing opportunities for the commerce world. A wide range of advertisement options is available: banner ads, text ads, popups, and targeted ads. The last is of particular interest because it can provide advertisements targeted at the location of the user (country, town, etc.). Take this example: your business has a small advertising budget and you want to do some Internet advertising. Every click your ad gets costs you money. If you only operate in Masterton, do you want people from Auckland or the USA to see your ads and click on them, costing you money? Internet advertising is much cheaper than TV marketing. The bid for a 30-second advertising slot in the Super Bowl went for a record 2.4 million dollars. Most Internet advertising agencies do not charge you for the number of times your ad is viewed, but for the number of people who click on it; this makes it much cheaper than TV advertising. Google's AdSense program uses an advanced computer program to analyze the content of the page and deliver ads relevant to the page content. For example: you're looking at a page reviewing books, so the targeted ads will show bookstores in your area. http://www.gaebler.com/Television-Advertising-Costs.htm https://adwords.google.com/select/main?cmd=Login&sourceid=AWO&subid=US-ET-ADS&hl=en_US https://www.google.com/adsense/?sourceid=aso&subid=ww-et-awhomegap&hl=en_US Preliminary draft of Q2: Since the...
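The content-targeting idea above can be sketched in a few lines of Python. This is emphatically not Google's actual AdSense algorithm, just a toy keyword-overlap scorer with invented ad names, to show why a bookstore ad would win on a book-review page.

```python
# Toy sketch of content-targeted advertising (NOT Google's real algorithm):
# score each ad by how many of its keywords appear on the page.
page_text = "a page reviewing books and novels from local bookstores"

# Hypothetical advertisers and their chosen keywords.
ads = {
    "Masterton Bookstore": {"books", "bookstores", "novels"},
    "Auckland Car Yard": {"cars", "vehicles"},
}

words = set(page_text.lower().split())
scores = {ad: len(keywords & words) for ad, keywords in ads.items()}
best_ad = max(scores, key=scores.get)
print(best_ad)  # Masterton Bookstore
```

A real system also weighs bids, click-through history, and user location, but the core matching step works on the same principle.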

Thursday, July 18, 2019

Analyse and Compare the Physical Storage Structures and Types of Available Indexes of the Latest Versions of: 1. Oracle 2. SQL Server 3. DB2 4. MySQL 5. Teradata

Assignment # 5 (Individual) Submission 29 Dec 11 Objective: To Enhance Analytical Ability and Knowledge * Analyse and compare the physical storage structures and types of available indexes of the latest versions of: 1. Oracle 2. SQL Server 3. DB2 4. MySQL 5. Teradata. First of all, define a comparative framework. Recommend one product for organizations of around 2000-4000 employees, with sound reasoning based on physical storage structures.
Introduction to Physical Storage Structures
One characteristic of an RDBMS is the independence of logical data structures such as tables, views, and indexes from physical storage structures. Because physical and logical structures are separate, you can manage physical storage of data without affecting access to logical structures. For example, renaming a database file does not rename the tables stored in it. The following sections explain the physical database structures of an Oracle database, including datafiles, redo log files, and control files.
Datafiles
Every Oracle database has one or more physical datafiles. The datafiles contain all the database data. The data of logical database structures, such as tables and indexes, is physically stored in the datafiles allocated for a database. The characteristics of datafiles are: * A datafile can be associated with only one database. * Datafiles can have certain characteristics set to let them automatically extend when the database runs out of space. * One or more datafiles form a logical unit of database storage called a tablespace. Data in a datafile is read, as needed, during normal database operation and stored in the memory cache of Oracle. For example, assume that a user wants to access some data in a table of a database. If the requested information is not already in the memory cache for the database, then it is read from the appropriate datafiles and stored in memory. Modified or new data is not necessarily written to a datafile immediately.
To reduce the amount of disk access and to increase performance, data is pooled in memory and written to the appropriate datafiles all at once, as determined by the database writer (DBWn) background process.
Control Files
Every Oracle database has a control file. A control file contains entries that specify the physical structure of the database. For example, it contains the following information: * Database name * Names and locations of datafiles and redo log files * Time stamp of database creation. Oracle can multiplex the control file, that is, simultaneously maintain a number of identical control file copies, to protect against a failure involving the control file. Every time an instance of an Oracle database is started, its control file identifies the database and redo log files that must be opened for database operation to proceed. If the physical makeup of the database is altered (for example, if a new datafile or redo log file is created), then the control file is automatically modified by Oracle to reflect the change. A control file is also used in database recovery.
Redo Log Files
Every Oracle database has a set of two or more redo log files. The set of redo log files is collectively known as the redo log for the database. A redo log is made up of redo entries (also called redo records). The primary function of the redo log is to record all changes made to data. If a failure prevents modified data from being permanently written to the datafiles, then the changes can be obtained from the redo log, so work is never lost. To protect against a failure involving the redo log itself, Oracle allows a multiplexed redo log so that two or more copies of the redo log can be maintained on different disks. The information in a redo log file is used only to recover the database from a system or media failure that prevents database data from being written to the datafiles.
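The recovery idea behind the redo log can be sketched in miniature. This is an illustrative toy, not Oracle's on-disk format: the "datafile" is a dict of last-written values, each redo record captures a later change, and replaying the log in order restores the changes that never reached disk.

```python
# Toy sketch of redo-log recovery (illustrative only, not Oracle's format).
datafile = {"emp_count": 10, "balance": 100}      # last state written to disk
redo_log = [("balance", 150), ("emp_count", 11)]  # changes made after that write

def roll_forward(state, log):
    """Apply every redo record, in order, to restore the lost changes."""
    for key, new_value in log:
        state[key] = new_value
    return state

recovered = roll_forward(dict(datafile), redo_log)
print(recovered)  # {'emp_count': 11, 'balance': 150}
```

Because every change is recorded before it is considered durable, replaying the log after a crash brings the datafiles back to the moment of failure; the next section calls this "rolling forward".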
For example, if an unexpected power outage terminates database operation, then data in memory cannot be written to the datafiles, and the data is lost. However, lost data can be recovered when the database is opened, after power is restored. By applying the information in the most recent redo log files to the database datafiles, Oracle restores the database to the time at which the power failure occurred. The process of applying the redo log during a recovery operation is called rolling forward.
Archive Log Files
You can enable automatic archiving of the redo log. Oracle automatically archives log files when the database is in ARCHIVELOG mode.
Parameter Files
Parameter files contain a list of configuration parameters for that instance and database. Oracle recommends that you create a server parameter file (SPFILE) as a dynamic means of maintaining initialization parameters. A server parameter file lets you store and manage your initialization parameters persistently in a server-side disk file.
Alert and Trace Log Files
Each server and background process can write to an associated trace file. When an internal error is detected by a process, it dumps information about the error to its trace file. Some of the information written to a trace file is intended for the database administrator, while other information is for Oracle Support Services. Trace file information is also used to tune applications and instances. The alert file, or alert log, is a special trace file. The alert file of a database is a chronological log of messages and errors.
Backup Files
To restore a file is to replace it with a backup file. Typically, you restore a file when a media failure or user error has damaged or deleted the original file. User-managed backup and recovery requires you to actually restore backup files before you can perform a trial recovery of the backups.
Server-managed backup and recovery manages the backup process, such as scheduling of backups, as well as the recovery process, such as applying the correct backup file when recovery is needed. A database instance is a set of memory structures that manage database files. Figure 11-1 shows the relationship between the instance and the files that it manages.
Figure 11-1: Database Instance and Database Files
Mechanisms for Storing Database Files
Several mechanisms are available for allocating and managing the storage of these files. The most common mechanisms include: 1. Oracle Automatic Storage Management (Oracle ASM): Oracle ASM includes a file system designed exclusively for use by Oracle Database. 2. Operating system file system: Most Oracle databases store files in a file system, which is a data structure built inside a contiguous disk address space. All operating systems have file managers that allocate and deallocate disk space into files within a file system. A file system enables disk space to be allocated to many files. Each file has a name and is made to appear as a contiguous address space to applications such as Oracle Database. The database can create, read, write, resize, and delete files. A file system is commonly built on top of a logical volume constructed by a software package called a logical volume manager (LVM). The LVM enables pieces of multiple physical disks to be combined into a single contiguous address space that appears as one disk to higher layers of software. 3. Raw device: Raw devices are disk partitions or logical volumes not formatted with a file system. The primary benefit of raw devices is the ability to perform direct I/O and to write larger buffers. In direct I/O, applications write to and read from the storage device directly, bypassing the operating system buffer cache. 4.
Cluster file system: A cluster file system is software that enables multiple computers to share file storage while maintaining consistent space allocation and file content. In an Oracle RAC environment, a cluster file system makes shared storage appear as a file system shared by many computers in a clustered environment. With a cluster file system, the failure of a computer in the cluster does not make the file system unavailable. In an operating system file system, however, if a computer sharing files through NFS or other means fails, then the file system is unavailable. A database employs a combination of the preceding storage mechanisms. For example, a database could store the control files and online redo log files in a traditional file system, some user data files on raw partitions, the remaining data files in Oracle ASM, and archive the redo log files to a cluster file system.
Indexes in Oracle
There are several types of indexes available in Oracle, all designed for different circumstances: 1. b*tree indexes: the most common type (especially in OLTP environments) and the default type 2. b*tree cluster indexes: for clusters 3. hash cluster indexes: for hash clusters 4. reverse key indexes: useful in Oracle Real Application Cluster (RAC) applications 5. bitmap indexes: common in data warehouse applications 6. partitioned indexes: also useful for data warehouse applications 7. function-based indexes 8. index organized tables 9. domain indexes. Let's look at these Oracle index types in a little more detail.
B*Tree Indexes
B*tree stands for balanced tree. This means that the height of the index is the same for all values, thereby ensuring that retrieving the data for any one value takes approximately the same amount of time as for any other value. Oracle b*tree indexes are best used when each value has high cardinality (a low number of occurrences per value), for example primary key indexes or unique indexes.
One important point to note is that NULL values are not indexed. They are the most common type of index in OLTP systems.
B*Tree Cluster Indexes
These are b*tree indexes defined for clusters. Clusters are two or more tables with one or more common columns that are usually accessed together (via a join). CREATE INDEX product_orders_ix ON CLUSTER product_orders;
Hash Cluster Indexes
In a hash cluster, rows that have the same hash key value (generated by a hash function) are stored together in the Oracle database. Hash clusters are equivalent to indexed clusters, except the index key is replaced with a hash function. This also means that there is no separate index, as the hash is the index. CREATE CLUSTER emp_dept_cluster (dept_id NUMBER) HASHKEYS 50;
Reverse Key Indexes
These are typically used in Oracle Real Application Cluster (RAC) applications. In this type of index the bytes of each of the indexed columns are reversed (but the column order is maintained). This is useful when new data is always inserted at one end of the index, as occurs when using a sequence, since it ensures new index values are created evenly across the leaf blocks, preventing the index from becoming unbalanced, which may in turn affect performance. CREATE INDEX emp_ix ON emp(emp_id) REVERSE;
Bitmap Indexes
These are commonly used in data warehouse applications for tables with no updates and whose columns have low cardinality (i.e. there are few distinct values). In this type of index Oracle stores a bitmap for each distinct value in the index, with 1 bit for each row in the table. These bitmaps are expensive to maintain and are therefore not suitable for applications which make a lot of writes to the data. For example, consider a car manufacturer which records information about cars sold, including the colour of each car. Each colour is likely to occur many times and is therefore suitable for a bitmap index.
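The bitmap structure just described is easy to model. The sketch below is illustrative only (it mirrors the car-colour example, not Oracle's physical format): one bitmap per distinct value, one bit per row, and a WHERE clause over two colours becomes a cheap bitwise OR.

```python
# Toy bitmap index over the car-colour example (not Oracle's real format).
colours = ["red", "blue", "red", "green", "blue", "red"]  # one entry per row

# One bitmap per distinct value, with 1 bit per row in the table.
bitmap_index = {}
for row, colour in enumerate(colours):
    bits = bitmap_index.setdefault(colour, [0] * len(colours))
    bits[row] = 1

# Low cardinality: three small bitmaps cover all six rows.
print(bitmap_index["red"])  # [1, 0, 1, 0, 0, 1]

# Answering "WHERE colour = 'red' OR colour = 'green'" is a bitwise OR.
matches = [r | g for r, g in zip(bitmap_index["red"], bitmap_index["green"])]
print(matches)              # [1, 0, 1, 1, 0, 1]
```

This also shows why heavy writes hurt: every insert or update touches a bit in one bitmap per affected value, so bitmap indexes suit mostly-read warehouse tables.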
CREATE BITMAP INDEX car_col ON cars(colour);
Partitioned Indexes
Partitioned indexes are also useful in Oracle data warehouse applications where there is a large amount of data that is partitioned by a particular dimension, such as time. Partitioned indexes can either be created as local partitioned indexes or global partitioned indexes. Local partitioned indexes mean that the index is partitioned on the same columns and with the same number of partitions as the table. For global partitioned indexes the partitioning is user defined and is not the same as the underlying table. Refer to the create index statement in the Oracle SQL language reference for details.
Function-based Indexes
As the name suggests, these are indexes created on the result of a function modifying a column value. For example: CREATE INDEX upp_ename ON emp(UPPER(ename)); The function must be deterministic (always return the same value for the same input).
Index Organized Tables
In an index-organized table all the data is stored in the Oracle database in a B*tree index structure defined on the table's primary key. This is ideal when related pieces of data must be stored together or data must be physically stored in a specific order. Index-organized tables are often used for information retrieval, spatial and OLAP applications.
Domain Indexes
These indexes are created by user-defined indexing routines and enable the user to define his or her own indexes on custom data types (domains) such as pictures, maps or fingerprints, for example. These types of index require in-depth knowledge about the data and how it will be accessed.
Indexes in SQL Server
Clustered: A clustered index sorts and stores the data rows of the table or view in order based on the clustered index key. The clustered index is implemented as a B-tree index structure that supports fast retrieval of the rows, based on their clustered index key values.
Nonclustered: A nonclustered index can be defined on a table or view with a clustered index or on a heap. Each index row in the nonclustered index contains the nonclustered key value and a row locator. This locator points to the data row in the clustered index or heap having the key value. The rows in the index are stored in the order of the index key values, but the data rows are not guaranteed to be in any particular order unless a clustered index is created on the table.
Unique: A unique index ensures that the index key contains no duplicate values and therefore every row in the table or view is in some way unique. Both clustered and nonclustered indexes can be unique.
Index with included columns: A nonclustered index that is extended to include nonkey columns in addition to the key columns.
Full-text: A special type of token-based functional index that is built and maintained by the Microsoft Full-Text Engine for SQL Server. It provides efficient support for sophisticated word searches in character string data.
Spatial: A spatial index provides the ability to perform certain operations more efficiently on spatial objects (spatial data) in a column of the geometry data type. The spatial index reduces the number of objects on which relatively costly spatial operations need to be applied.
Filtered: An optimized nonclustered index especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance, reduce index maintenance costs, and reduce index storage costs compared with full-table indexes.
XML: A shredded, and persisted, representation of the XML binary large objects (BLOBs) in the xml data type column.
SQL Server Storage Structures
SQL Server does not see data and storage in exactly the same way a DBA or end-user does.
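The filtered-index idea has a close, readily runnable analogue in SQLite's partial indexes, so a quick demonstration is possible with Python's built-in sqlite3 module. The table and predicate here are invented for illustration; SQL Server's syntax differs (CREATE INDEX ... WHERE on the server), but the concept of indexing only a well-defined subset of rows is the same.

```python
# Demonstrating the filtered-index concept with SQLite partial indexes
# (a stand-in for SQL Server's filtered indexes; table is illustrative).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (status) VALUES (?)",
                 [("open",), ("closed",), ("open",), ("closed",)])

# Index only the subset of rows the hot queries care about.
conn.execute("CREATE INDEX idx_open ON orders(status) WHERE status = 'open'")

open_ids = [row[0] for row in conn.execute(
    "SELECT id FROM orders WHERE status = 'open' ORDER BY id")]
print(open_ids)  # [1, 3]
```

Because the index stores entries only for matching rows, it stays small and cheap to maintain while still accelerating the queries that hit that subset.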
The DBA sees initialized devices, device fragments allocated to databases, segments defined within databases, tables defined within segments, and rows stored in tables. SQL Server views storage at a lower level: device fragments allocated to databases, pages allocated to tables and indexes within the database, and information stored on pages. There are two basic types of storage structures in a database: * Linked data pages * Index trees. All information in SQL Server is stored at the page level. When a database is created, all space allocated to it is divided into a number of pages, each page 2KB in size. There are five types of pages within SQL Server: 1. Data and log pages 2. Index pages 3. Text/image pages 4. Allocation pages 5. Distribution pages. All pages in SQL Server contain a page header. The page header is 32 bytes in size and contains the logical page number, the next and previous logical page numbers in the page linkage, the object_id of the object to which the page belongs, the minimum row size, the next available row number within the page, and the byte location of the start of the free space on the page. The contents of a page header can be examined by using the dbcc page command. You must be logged in as sa to run the dbcc page command. The syntax for the dbcc page command is as follows: dbcc page (dbid | page_no [,0 | 1 | 2]) The SQL Server keeps track of which object a page belongs to, if any. The allocation of pages within SQL Server is managed through the use of allocation units and allocation pages.
Allocation Pages
Space is allocated to a SQL Server database by the create database and alter database commands. The space allocated to a database is divided into a number of 2KB pages. Each page is assigned a logical page number starting at page 0 and increased sequentially. The pages are then divided into allocation units of 256 contiguous 2KB pages, or 512KB (1/2 MB) each.
The first page of each allocation unit is an allocation page that controls the allocation of all pages within the allocation unit. The allocation pages control the allocation of pages to tables and indexes within the database. Pages are allocated in contiguous blocks of eight pages called extents. The minimum unit of allocation within a database is an extent. When a table is created, it is initially assigned a single extent, or 16KB of space, even if the table contains no rows. There are 32 extents within an allocation unit (256/8). An allocation page contains 32 extent structures, one for each extent within that allocation unit. Each extent structure is 16 bytes and contains the following information: 1. Object ID of object to which extent is allocated 2. Next extent ID in chain 3. Previous extent ID in chain 4. Allocation bitmap 5. Deallocation bitmap 6. Index ID (if any) to which the extent is allocated 7. Status. The allocation bitmap for each extent structure indicates which pages within the allocated extent are in use by the table. The deallocation bitmap is used to identify pages that have become empty during a transaction that has not yet been completed. The actual marking of the page as unused does not occur until the transaction is committed, to prevent another transaction from allocating the page before the transaction is complete.
Data Pages
A data page is the basic unit of storage within SQL Server. All the other types of pages within a database are essentially variations of the data page. All data pages contain a 32-byte header, as described earlier. With a 2KB page (2048 bytes) this leaves 2016 bytes for storing data within the data page. In SQL Server, data rows cannot cross page boundaries. The maximum size of a single row is 1962 bytes, including row overhead. Data pages are linked to one another by using the page pointers (prevpg, nextpg) contained in the page header.
This page linkage enables SQL Server to locate all rows in a table by scanning all pages in the link. Data page linkage can be thought of as a two-way linked list. This enables SQL Server to easily link new pages into or unlink pages from the page linkage by adjusting the page pointers. In addition to the page header, each data page also contains data rows and a row offset table. The row offset table grows backward from the end of the page and contains the location of each row on the data page. Each entry is 2 bytes wide.
Data Rows
Data is stored on data pages in data rows. The size of each data row is a factor of the sum of the size of the columns plus the row overhead. Each record in a data page is assigned a row number. A single byte is used within each row to store the row number. Therefore, SQL Server has a maximum limit of 256 rows per page, because that is the largest value that can be stored in a single byte (2^8). For a data row containing all fixed-length columns, there are four bytes of overhead per row: 1 byte to store the number of variable-length columns (in this case, 0), 1 byte to store the row number, and 2 bytes in the row offset table at the end of the page to store the location of the row on the page. If a data row contains variable-length columns, there is additional overhead per row.
A data row is variable in size if any column is defined as varchar or varbinary, or allows null values. In addition to the 4 bytes of overhead described previously, the following bytes are required to store the actual row width and the location of columns within the data row:

* 2 bytes to store the total row width
* 1 byte per variable-length column to store the starting location of the column within the row
* 1 byte for the column offset table
* 1 additional byte for each 256-byte boundary passed

Within each row containing variable-length columns, SQL Server builds a column offset table backward from the end of the row, with one entry per variable-length column. Because only 1 byte is used for each column, giving a maximum offset of 255, an adjust byte must be added for each 256-byte boundary crossed. Variable-length columns are always stored after all fixed-length columns, regardless of the order of the columns in the table definition.

Estimating Row and Table Sizes

Knowing the size of a data row and the corresponding overhead per row helps you determine the number of rows that can be stored per page, which affects system performance. More rows per page can help query performance by reducing the number of pages that must be read to satisfy the query. Conversely, fewer rows per page can improve performance for concurrent transactions by reducing the chance of two or more users accessing rows on the same page that may be locked. Let's take a look at how you can estimate row and table sizes. For fixed-length fields with no null values, the row size is the sum of the column widths plus the row overhead.

The Row Offset Table

The location of a row within a page is determined by using the row offset table at the end of the page. To find a specific row within the page, SQL Server looks in the row offset table for the starting byte address of that row ID within the data page.
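The sizing rules from the "Estimating Row and Table Sizes" discussion above can be sketched as a small estimator. This is a hypothetical helper that only encodes the overhead rules stated in the text (4 bytes fixed overhead; extra bytes for variable-length rows), so treat the exact figures as illustrative rather than authoritative:

```python
# Hypothetical row-size estimator following the overhead rules in the text:
# 4 bytes of base overhead per row; rows with variable-length columns add
# 2 bytes (total row width), 1 byte per variable column offset, 1 byte for
# the column offset table, and 1 adjust byte per 256-byte boundary passed.

def estimate_row_size(fixed_widths, variable_widths=()):
    size = sum(fixed_widths) + sum(variable_widths) + 4   # data + base overhead
    if variable_widths:
        size += 2                      # total row width
        size += len(variable_widths)   # 1 offset byte per variable column
        size += 1                      # column offset table byte
        size += size // 256            # adjust bytes for 256-byte boundaries
    return size

def rows_per_page(row_size, usable=2016):   # 2016 usable bytes per 2KB page
    return usable // row_size

print(estimate_row_size([4, 8, 30]))          # 46 (all fixed-length columns)
print(estimate_row_size([4], [50, 100]))      # 163 (two variable-length columns)
print(rows_per_page(46))                      # 43 rows fit on one page
```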
Note that SQL Server keeps all free space at the end of the data page, shifting rows up to fill in where a previous row was deleted, ensuring there is no space fragmentation within the page. If the offset table contains a zero value for a row ID, that row has been deleted.

Index Structure

All SQL Server indexes are B-trees. There is a single root page at the top of the tree, branching out into N pages at each intermediate level until the bottom, or leaf level, of the index is reached. The index tree is traversed by following pointers from the upper-level pages down through the lower-level pages. In addition, each index level is a separate page chain. There may be many intermediate levels in an index. The number of levels depends on the index key width, the type of index, and the number of rows and/or pages in the table. The number of levels is important in relation to index performance.

Non-clustered Indexes

A non-clustered index is analogous to an index in a textbook: the data is stored in one place and the index in another, with pointers to the storage location of the data. The items in the index are stored in the order of the index key values, but the information in the table is stored in a different order (which can be dictated by a clustered index). If no clustered index is created on the table, the rows are not guaranteed to be in any particular order. Similar to the way you use an index in a book, Microsoft SQL Server 2000 searches for a data value by searching the non-clustered index to find the location of the data value in the table, and then retrieves the data directly from that location.
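Returning to the row offset table described above, its behavior on deletes can be modeled in a few lines. This toy page tracks offsets that grow from a 32-byte header, marks deleted rows with a zero offset, and shifts later rows up so free space stays at the end, as the text describes; it is a sketch, not SQL Server's actual on-page format:

```python
# Toy model of a data page's row offset table: a zero offset marks a deleted
# row, and remaining rows are compacted so all free space stays at the end.

class Page:
    HEADER = 32   # rows begin after the 32-byte page header

    def __init__(self):
        self.data = []      # per row ID: row bytes, or None if deleted
        self.offsets = []   # per row ID: byte offset within the page (0 = deleted)

    def insert(self, row):
        self.data.append(row)
        self._compact()
        return len(self.data) - 1   # row ID

    def delete(self, row_id):
        self.data[row_id] = None
        self._compact()

    def _compact(self):
        # Recompute offsets front-to-back, shifting rows up over deleted ones.
        pos = self.HEADER
        self.offsets = []
        for row in self.data:
            if row is None:
                self.offsets.append(0)
            else:
                self.offsets.append(pos)
                pos += len(row)

p = Page()
a = p.insert(b"x" * 10)
b = p.insert(b"y" * 20)
c = p.insert(b"z" * 5)
p.delete(b)
print(p.offsets)   # [32, 0, 42] -- row c shifted up into the freed space
```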
This makes non-clustered indexes the optimal choice for exact-match queries, because the index contains entries describing the exact location in the table of the data values being searched for. If the underlying table is sorted using a clustered index, the location is the clustering key value; otherwise, the location is the row ID (RID), comprised of the file number, page number, and slot number of the row. For example, to search for an employee ID (emp_id) in a table that has a non-clustered index on the emp_id column, SQL Server looks through the index to find an entry that lists the exact page and row in the table where the matching emp_id can be found, and then goes directly to that page and row.

Clustered Indexes

A clustered index determines the physical order of data in a table. A clustered index is analogous to a telephone directory, which arranges data by last name. Because the clustered index dictates the physical storage order of the data in the table, a table can contain only one clustered index. However, the index can comprise multiple columns (a composite index), much the way a telephone directory is organized by last name and then first name. Clustered indexes are very similar to Oracle's index-organized tables (IOTs). A clustered index is particularly efficient on columns that are often searched for ranges of values. After the row with the first value is found using the clustered index, rows with subsequent indexed values are guaranteed to be physically adjacent. For example, if an application frequently executes a query to retrieve records between a range of dates, a clustered index can quickly locate the row containing the beginning date, and then retrieve all adjacent rows in the table until the last date is reached.
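The date-range example above can be sketched to show why clustered order helps: once a binary search finds the first matching row, the rest of the range is physically adjacent and can be read sequentially. The table and column names here are made up for illustration:

```python
# Sketch of a range scan over rows kept sorted by their clustering key:
# binary-search to the first match, then read adjacent rows until past the end.
import bisect

# Hypothetical rows clustered on "day of month" (1..30).
orders = [(day, f"order-{day}") for day in range(1, 31)]

def range_scan(rows, lo, hi):
    start = bisect.bisect_left(rows, (lo,))   # locate the first row >= lo
    out = []
    for day, payload in rows[start:]:         # then scan physically adjacent rows
        if day > hi:
            break                             # past the end of the range: stop
        out.append(payload)
    return out

print(range_scan(orders, 10, 12))   # ['order-10', 'order-11', 'order-12']
```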
This can help increase the performance of this type of query. Also, if a column (or columns) is frequently used to sort the data retrieved from a table, it can be advantageous to cluster (physically sort) the table on that column to save the cost of a sort each time it is queried. Clustered indexes are also efficient for finding a specific row when the indexed value is unique. For example, the fastest way to find a particular employee using the unique employee ID column emp_id is to create a clustered index or PRIMARY KEY constraint on the emp_id column.

Note: PRIMARY KEY constraints create clustered indexes automatically if no clustered index already exists on the table and a non-clustered index is not specified when you create the PRIMARY KEY constraint.

Index Structures

Indexes are created on columns in tables or views. The index provides a fast way to look up data based on the values within those columns. For example, if you create an index on the primary key and then search for a row of data based on one of the primary key values, SQL Server first finds that value in the index, and then uses the index to quickly locate the entire row of data. Without the index, a table scan would have to be performed to locate the row, which can have a significant effect on performance. You can create indexes on most columns in a table or a view. The exceptions are primarily columns configured with large object (LOB) data types, such as image, text, and varchar(max). You can also create indexes on XML columns, but those indexes are slightly different from the basic index and are beyond the scope of this article. Instead, I'll focus on the indexes that are implemented most commonly in a SQL Server database. An index is made up of a set of pages (index nodes) organized in a B-tree structure.
This structure is hierarchical in nature, with the root node at the top of the hierarchy and the leaf nodes at the bottom, as shown in Figure 1.

Figure 1: B-tree structure of a SQL Server index

When a query is issued against an indexed column, the query engine starts at the root node and navigates down through the intermediate nodes, with each layer of the intermediate level more granular than the one above. The query engine continues down through the index nodes until it reaches the leaf node. For example, if you're searching for the value 123 in an indexed column, the query engine first looks in the root level to determine which page to reference in the top intermediate level. In this example, the first page points to the values 1-100 and the second page to the values 101-200, so the query engine goes to the second page on that level. The query engine then determines that it must go to the third page at the next intermediate level. From there, the query engine navigates to the leaf node for value 123. The leaf node will contain either the entire row of data or a pointer to that row, depending on whether the index is clustered or nonclustered.

Clustered Indexes

A clustered index stores the actual data rows at the leaf level of the index. Returning to the example above, that would mean that the entire row of data associated with the primary key value of 123 would be stored in that leaf node. An important characteristic of the clustered index is that the indexed values are sorted in either ascending or descending order. As a result, there can be only one clustered index on a table or view. In addition, data in a table is sorted only if a clustered index has been defined on the table.

Note: A table that has a clustered index is referred to as a clustered table. A table that has no clustered index is referred to as a heap.
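The root-to-leaf navigation described for value 123 can be sketched with a toy two-level tree. The key ranges match the example in the text (first root entry covering 1-100, second covering 101-200); everything else here is a simplified stand-in for real index pages:

```python
# Toy navigation of a two-level B-tree-style index: each node holds the lowest
# key of each child page, and a binary search picks the child to descend into.
import bisect

root = [1, 101, 201]                    # lowest key on each intermediate page
intermediate = {
    1:   [1, 26, 51, 76],               # lowest key on each leaf page below it
    101: [101, 111, 121, 131],
    201: [201, 226, 251, 276],
}

def find_leaf(key):
    # Root level: pick the intermediate page whose range contains the key.
    page = root[bisect.bisect_right(root, key) - 1]
    # Intermediate level: pick the leaf page the same way.
    leaves = intermediate[page]
    return page, leaves[bisect.bisect_right(leaves, key) - 1]

print(find_leaf(123))   # (101, 121): second root entry, then its third leaf page
```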
Nonclustered Indexes

Unlike a clustered index, the leaf nodes of a nonclustered index contain only the values from the indexed columns and row locators that point to the actual data rows, rather than the data rows themselves. This means the query engine must take an additional step to locate the actual data. A row locator's structure depends on whether it points to a clustered table or to a heap. If referencing a clustered table, the row locator holds the clustering key value, which is used to navigate the clustered index to the correct data row. If referencing a heap, the row locator points directly to the data row. Nonclustered indexes do not sort the table's data the way clustered indexes do; however, you can create more than one nonclustered index per table or view. SQL Server 2005 supports up to 249 nonclustered indexes per table, and SQL Server 2008 supports up to 999. This certainly doesn't mean you should create that many: indexes can both help and hinder performance, as I explain later in the article. In addition to being able to create multiple nonclustered indexes on a table or view, you can also add included columns to your index. This means that you can store at the leaf level not only the values from the indexed columns, but also the values from non-indexed columns. This strategy allows you to get around some of the limitations on indexes. For example, you can include non-indexed columns to exceed the size limit on indexed columns (900 bytes in most cases).

Index Types

In addition to an index being clustered or nonclustered, it can be configured in other ways:

* Composite index: An index that contains more than one column. In both SQL Server 2005 and 2008, you can include up to 16 columns in an index, as long as the index doesn't exceed the 900-byte limit. Both clustered and nonclustered indexes can be composite indexes.
* Unique index: An index that ensures the uniqueness of each value in the indexed column. If the index is a composite, the uniqueness is enforced across the columns as a whole, not on the individual columns. For example, if you were to create an index on the FirstName and LastName columns in a table, the names together must be unique, but the individual names can be duplicated. A unique index is automatically created when you define a primary key or unique constraint:
* Primary key: When you define a primary key constraint on one or more columns, SQL Server automatically creates a unique, clustered index if a clustered index does not already exist on the table or view. However, you can override the default behavior and define a unique, nonclustered index on the primary key.
* Unique: When you define a unique constraint, SQL Server automatically creates a unique, nonclustered index. You can specify that a unique clustered index be created if a clustered index does not already exist on the table.
* Covering index: A type of index that includes all the columns that are needed to process a particular query. For example, your query might retrieve the FirstName and LastName columns from a table, based on a value in the ContactID column. You can create a covering index that includes all three columns.

Teradata

What is the Teradata RDBMS?

The Teradata RDBMS is a complete relational database management system. With the Teradata RDBMS, you can access, store, and operate on data using Teradata Structured Query Language (Teradata SQL), which is broadly compatible with IBM and ANSI SQL. Users of a client system send requests to the Teradata RDBMS through the Teradata Director Program (TDP) using the Call-Level Interface (CLI) program (Version 2) or via Open Database Connectivity (ODBC) using the Teradata ODBC Driver. As data requirements grow increasingly complex, so does the need for a faster, simpler way to manage a data warehouse.
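The nonclustered-index ideas above (row locators into a heap, included columns, covering queries) can be illustrated with a toy lookup structure. The ContactID/FirstName/LastName columns follow the covering-index example in the text; the dict-based "index" is purely a sketch, not how SQL Server stores index pages:

```python
# Toy nonclustered index over a heap: the index maps key -> row locator (here
# a list position standing in for a RID). With LastName stored as an
# "included" column at the leaf, a query needing only LastName never touches
# the heap at all -- the index "covers" it.

heap = [  # (ContactID, FirstName, LastName) rows in arbitrary insertion order
    (3, "Ada", "Lovelace"),
    (1, "Alan", "Turing"),
    (2, "Grace", "Hopper"),
]

# key -> (row locator, included LastName value)
index = {cid: (rid, last) for rid, (cid, first, last) in enumerate(heap)}

def last_name(contact_id):
    # Covered query: answered from the index leaf alone.
    return index[contact_id][1]

def full_row(contact_id):
    # Not covered: one extra step through the row locator into the heap.
    rid, _ = index[contact_id]
    return heap[rid]

print(last_name(2))   # Hopper
print(full_row(2))    # (2, 'Grace', 'Hopper')
```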
That combination of unmatched performance and efficient management is built into the foundation of the Teradata Database. The Teradata Database is continuously enhanced with new features and functionality that automatically distribute data and balance mixed workloads, even in the most complex environments. Teradata Database 14 currently offers a low total cost of ownership in a simple, scalable, parallel, and self-managing solution. This proven, high-performance decision-support engine running on the Teradata Purpose-Built Platform Family offers a full suite of data access and management tools, plus world-class services. The Teradata Database supports installations from fewer than 10 gigabytes to huge warehouses with hundreds of terabytes and thousands of customers.

Features & Benefits

Automatic Built-In Functionality
* Fast Query Performance: The "Parallel Everything" design and smart Teradata Optimizer enable fast query execution across platforms.
* Quick Time to Value: Simple setup steps with automatic "hands off" distribution of data, along with integrated load utilities, result in rapid installations.
* Simple to Manage: DBAs never have to set parameters, manage table space, or reorganize data.
* Responsive to Business Change: A fully parallel MPP "shared nothing" architecture scales linearly across data, users, and applications, providing consistent and predictable performance and growth.

Easy "Set & Go" Optimization Options
* Powerful, Embedded Analytics: In-database data mining, virtual OLAP/cubes, geospatial and temporal analytics, and custom and embedded services in an extensible open parallel framework drive efficient and differentiated business insight.
* Advanced Workload Management: Workload management options by user, application, time of day, and CPU exceptions.
* Intelligent Scan Elimination: "Set and Go" options reduce full-file scanning (Primary, Secondary, Multi-level Partitioned Primary, Aggregate Join Index, Sync Scan).
Physical Storage Structure of Teradata

Teradata offers a true hybrid row-and-column database. All database management systems constantly tinker with the internal structure of their files on disk; each release brings an improvement or two that steadily improves analytic workload performance. However, few of the key players in relational database management systems (RDBMS) have altered the fundamental structure of having all of the columns of a table stored consecutively on disk for each record. The innovations and practical use cases of columnar databases have come from the independent-vendor world, where the approach has proven quite effective for an increasingly important class of analytic query. These columnar databases store data by columns instead of rows: all values of a single column are stored consecutively on disk, and the columns are tied together as "rows" only in a catalog reference. This gives the RDBMS data manager a much finer grain of control. It can access only the columns required for the query, as opposed to being forced to access all columns of the row. It is optimal for queries that need a small percentage of a table's columns, but suboptimal when you need most of the columns, due to the overhead of attaching all of the columns together to form the result sets.

Teradata 14 Hybrid Columnar

The unique innovation of Teradata 14 is to add columnar structure to a table, effectively mixing row structures, column structures, and multi-column structures directly in a DBMS that already powers many of the largest data warehouses in the world.
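The row-versus-column trade-off described above can be shown in miniature. This sketch keeps the same data in both layouts; the point is only that a column-oriented query touches one vector instead of every column of every row (the table contents are invented for illustration):

```python
# Minimal contrast between row storage and column storage: the columnar
# layout lets a query read only the columns it actually needs.

rows = [  # row store: all columns of each record stored together
    ("TX", 100, "2011-12-25"),
    ("GA", 250, "2011-12-25"),
    ("TX", 75,  "2011-12-26"),
]

# Column store: each column's values stored consecutively, tied together
# as "rows" only by position.
columns = {
    "state":  [r[0] for r in rows],
    "amount": [r[1] for r in rows],
    "date":   [r[2] for r in rows],
}

# A SUM over one column: the row store walks whole records...
total_row_store = sum(r[1] for r in rows)
# ...while the column store reads only the 'amount' vector.
total_col_store = sum(columns["amount"])

print(total_row_store, total_col_store)   # 425 425
```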
With intelligent exploitation of Teradata Columnar in Teradata 14, there is no longer a need to go outside the data warehouse DBMS for the performance that columnar provides, and it is no longer necessary to sacrifice the robustness and support of the DBMS that holds the post-operational data. A major component of that robustness is parallelism, a feature that has fueled much of Teradata's leadership position in large-scale enterprise data warehousing over the years. Teradata's parallelism, working with the columnar elements, creates an entirely new paradigm in analytic computing: the pinpoint accuracy of I/O with column and row partition elimination. With columnar and parallelism, the I/O executes very precisely on the data of interest to the query. This is finally a strong, and appropriate, architectural response to the I/O bottleneck that analytic queries have lived with for a decade. It may also be Teradata Database's most significant enhancement in that time. The physical structure of each container can be row oriented (extensive page metadata, including a map to offsets), referred to as "row storage format," or columnar (the row "number" is implied by the value's relative position).

Partition Elimination and Columnar

The idea of dividing data to create smaller units of work, and to make those units of work relevant to the query, is nothing new to Teradata Database, or to most DBMSs for that matter. While the concept is now being applied to the columns of a table, it has long been applied to rows in the form of partitioning and parallelism. One of the hallmarks of Teradata's unique approach is that all database functions (table scan, index scan, joins, sorts, insert, delete, update, load, and all utilities) are done in parallel all of the time. There is no conditional parallelism; all units of parallelism participate in each database action.
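The metadata-driven partition elimination this section introduces can be sketched as follows. Each toy partition carries min/max metadata; partitions whose range cannot match the predicate are skipped without reading their rows. The partition boundaries here are invented for illustration:

```python
# Sketch of partition elimination: read only cheap per-partition metadata,
# skip partitions whose [min, max] range cannot satisfy the predicate, and
# scan rows only in the partitions that survive.

partitions = [
    {"min": 1,   "max": 100, "rows": list(range(1, 101))},
    {"min": 101, "max": 200, "rows": list(range(101, 201))},
    {"min": 201, "max": 300, "rows": list(range(201, 301))},
]

def scan(pred_lo, pred_hi):
    touched, hits = 0, []
    for p in partitions:
        if p["max"] < pred_lo or p["min"] > pred_hi:
            continue                  # eliminated from metadata alone: no row I/O
        touched += 1
        hits += [r for r in p["rows"] if pred_lo <= r <= pred_hi]
    return touched, hits

touched, hits = scan(120, 130)
print(touched, len(hits))   # 1 11  (only the middle partition is read)
```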
Teradata eliminates partitions from needing I/O by reading partition metadata to understand the range of data placed into each partition and eliminating those that are ruled out by the predicates (see figure). There is no change to partition elimination in Teradata 14, except that the approach also works with columnar data, creating a combined row-and-column elimination possibility. In a partitioned, multi-container table, unneeded containers are virtually eliminated from consideration based on the selection and projection conditions of the query (see figure). Following the column elimination, unneeded partitions are virtually eliminated from consideration based on the projection conditions. For the price of a few metadata reads to facilitate the eliminations, the I/O can now retrieve a much more focused set of data. The addition of columnar elimination reduces expensive I/O operations, and hence query execution time, by orders of magnitude for column-selective queries. The combination of row and column elimination is a unique characteristic of Teradata's implementation of columnar.

Compression in Teradata Columnar

Storage costs, while decreasing per unit over time, still consume an increasing share of budgets due to the massive growth in the volume of data to store. While the data is required to be under management, it is equally important that the data be compressed. In addition to saving on storage costs, compression also greatly helps the I/O problem, effectively delivering more relevant information in each I/O. Columnar storage provides a unique opportunity to take advantage of a series of compression routines that make more sense when dealing with well-defined data of limited variance, like a column (versus a row, with its high variability). Teradata Columnar utilizes several compression methods that take advantage of the columnar orientation of the data. A few are highlighted below.
Run-Length Encoding

When there are repeating values (e.g., many successive rows with the value '12/25/11' in the date container), these are easily compressed in columnar systems like Teradata Columnar, which uses run-length encoding to simply indicate the range of rows to which the value applies.

Dictionary Encoding

Even when the values are not repeating successively, as in the date example, if they repeat within the container there is an opportunity to use a dictionary representation of the data to further save space. Dictionary encoding is done in Teradata Columnar by storing compressed forms of the complete value. The dictionary representations are fixed length, which allows the data pages to remain free of internal maps to where records begin: records begin at fixed offsets from the beginning of the container, and no "value-level" metadata is required. This small fact saves calculations at run time for page navigation, another benefit of columnar. For example, 1=Texas, 2=Georgia, and 3=Florida could be in the dictionary, and when those are the column values, the 1, 2, and 3 are stored in lieu of Texas, Georgia, and Florida. If there are 1,000,000 customers with only 50 possible values for state, the entire vector could be stored in 1,000,000 bytes (one byte minimum per value). In addition to dictionary compression, including the "trimming" of character fields, traditional compression (with algorithm UTF8) is available for Teradata Columnar data.

Delta Compression

Fields in a tight range of values can also benefit from storing only the offset ("delta") from a set value. Teradata Columnar calculates an average for a container and can store only the offsets from that value in place of the field. Whereas the value itself might be a full-size integer, the offsets can be small integers, which can halve the space required.
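The three schemes above can each be sketched in a few lines. These are toy versions that capture the idea, not Teradata's actual container formats; in particular, the delta sketch uses an integer average as the reference value, per the description in the text:

```python
# Toy versions of the three container compression schemes described above.

def run_length_encode(values):
    # Collapse successive repeats into (value, count) pairs.
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1][1] += 1
        else:
            out.append([v, 1])
    return out

def dictionary_encode(values):
    # Replace repeating values with small fixed-width codes.
    codes = {v: i for i, v in enumerate(dict.fromkeys(values))}
    return codes, [codes[v] for v in values]

def delta_encode(values):
    # Store small offsets from a reference value instead of full values.
    base = sum(values) // len(values)
    return base, [v - base for v in values]

print(run_length_encode(["12/25/11"] * 4 + ["12/26/11"]))
# [['12/25/11', 4], ['12/26/11', 1]]
codes, encoded = dictionary_encode(["TX", "GA", "TX", "FL"])
print(encoded)                      # [0, 1, 0, 2]
base, deltas = delta_encode([1000, 1003, 998])
print(base, deltas)                 # 1000 [0, 3, -2]
```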
Compression methods like these lose their effectiveness when a variety of field types, such as those found in a typical row, must be stored consecutively. The compression methods are applied automatically (if desired) to each container, and can vary across the columns of a table, or even from container to container within a column, based on the characteristics of the data in the container. Multiple methods can be used with each column, which is a strong feature of Teradata Columnar. The compounding effect of compression in columnar databases is a tremendous improvement over the standard compression available to a strictly row-based DBMS.

Teradata Indexes

Teradata provides several indexing options for optimizing the performance of your relational databases:

1. Primary indexes
2. Secondary indexes
3. Join indexes
4. Hash indexes
5. Reference indexes

Primary Index

The primary index determines the distribution of table rows on the disks controlled by AMPs. In the Teradata RDBMS, a primary index is required for row distribution and storage. When a new row is inserted, its hash code is derived by applying a hashing algorithm to the value in the column(s) of the primary index (as shown in the following figure). Rows having the same primary index value are stored on the same AMP.

Rules for Defining Primary Indexes

The primary index for a table should represent the data values most used by the SQL that accesses the table. Careful selection of the primary index is one of the most important steps in creating a table. Defining a primary index should follow these rules:

* A primary index should be defined to provide a nearly uniform distribution of rows among the AMPs: the more unique the index, the more even the distribution of rows and the better the space utilization.
* The index should be defined on as few columns as possible.
* A primary index can be either unique or non-unique.
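The distribution rule above (hash the primary index value to pick an AMP) can be sketched as follows. Teradata's actual hashing algorithm is proprietary; this sketch substitutes MD5 purely to show the mechanism, with a made-up four-AMP system:

```python
# Sketch of primary-index row distribution: a hash of the primary index value
# picks the AMP, so rows with the same index value land on the same AMP, and
# a reasonably unique index spreads rows almost evenly.
import hashlib

N_AMPS = 4   # hypothetical four-AMP system

def amp_for(pk_value):
    # Stand-in hash (MD5); Teradata uses its own hashing algorithm.
    digest = hashlib.md5(str(pk_value).encode()).digest()
    return int.from_bytes(digest[:4], "big") % N_AMPS

amps = {}
for emp_id in range(1000):
    amps.setdefault(amp_for(emp_id), []).append(emp_id)

# With a unique index over 1000 ids, each AMP gets roughly 250 rows.
print(sorted(len(v) for v in amps.values()))
```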
A unique index must have a unique value in the corresponding fields of every row; a non-unique index permits the insertion of duplicate field values. The unique primary index is more efficient. Once created, the primary index cannot be dropped or modified; it can be changed only by recreating the table. If a primary index is not defined in the CREATE TABLE statement through an explicit PRIMARY INDEX declaration, the default is to use one of the following:

* PRIMARY KEY
* First UNIQUE constraint
* First column

The primary index values are stored as an integral part of the primary table. The index should be based on the set selection most frequently used to access rows from the table and on the uniqueness of the values.

Secondary Index

In addition to a primary index, up to 32 unique and non-unique secondary indexes can be defined for a table. Compared to primary indexes, secondary indexes allow access to information in a table by alternate, less frequently used paths. A secondary index is a subtable that is stored on all AMPs, but separately from the primary table. The subtables, which are built and maintained by the system, contain the following:

* Row IDs of the subtable rows
* Base table index column values
* Row IDs of the base table rows (pointers)

As shown in the following figure, the secondary index subtable on each AMP is associated with the base table by the row ID.

Defining and Creating a Secondary Index

Secondary indexes are optional. Unlike the primary index, a secondary index can be added or dropped without recreating the table. One or more secondary indexes can be defined in the CREATE TABLE statement, or added to an existing table using the CREATE INDEX or ALTER TABLE statement. DROP INDEX can be used to drop a named or unnamed secondary index. Because secondary indexes require subtables, they require additional disk space and, therefore, may require additional I/Os for INSERTs, DELETEs, and UPDATEs.
Generally, secondary indexes are defined on column values frequently used in WHERE constraints.

Join Index

A join index is an indexing structure containing columns from multiple tables, specifically the resulting columns from one or more tables. Rather than having to join the individual tables each time the join operation is needed, the query can be resolved via the join index, in most cases dramatically improving performance.

Effects of Join Indexes

Depending on the complexity of the joins, the join index helps improve the performance of certain types of work. The following need to be considered when manipulating join indexes:

* Load utilities: Join indexes are not supported by the MultiLoad and FastLoad utilities; they must be dropped and recreated after the table has been loaded.
* Archive and Restore: Archive and Restore cannot be used on a join index itself. During a restore of a base table or database, the join index is marked as invalid. The join index must be dropped and recreated before it can be used again in the execution of queries.
* Fallback protection: Join index subtables cannot be Fallback-protected.
* Permanent Journal Recovery: The join index is not automatically rebuilt during the recovery process. Instead, the join index is marked as invalid, and it must be dropped and recreated before it can be used again in the execution of queries.
* Triggers: A join index cannot be defined on a table with triggers.
* Collecting statistics: In general, there is no benefit in collecting statistics on a join index for joining columns specified in the join index definition itself. Statistics related to these columns should be collected on the underlying base table rather than on the join index.

Defining and Creating Join Indexes

Join indexes can be created and dropped by using the CREATE JOIN INDEX and DROP JOIN INDEX statements.
Join indexes are automatically maintained by the system when updates (UPDATE, DELETE, and INSERT) are performed on the underlying base tables. Additional steps are included in the execution plan to regenerate the affected portion of the stored join result.

Hash Indexes

Hash indexes are used for the same purposes as single-table join indexes. The principal differences between hash indexes and single-table join indexes are listed in the following table. Hash indexes create a full or partial replication of a base table, with a primary index on a foreign key column, to facilitate joins of very large tables by hashing them to the same AMP. You can define a hash index on one table only. The functionality of hash indexes is a superset of that of single-table join indexes. Hash indexes are not indexes in the usual sense of the word: they are base tables that cannot be accessed directly by a query. The Optimizer includes a hash index in a query plan in the following situations:

* The index covers all or part of a join query, thus eliminating the need to redistribute rows to make the join. In the case of partial query covers, the Optimizer uses certain implicitly defined elements in the hash index to join it with its underlying base table to pick up the base table columns necessary to complete the cover.
* A query requests that one or more columns be aggregated, thus eliminating the need to perform the aggregate computation.

For the most part, hash index storage is identical to standard base table storage, except that hash indexes can be compressed. Hash index rows are hashed and partitioned on their primary index (which is always defined as non-unique). Hash index tables can be indexed explicitly, and their indexes are stored just like non-unique primary indexes for any other base table. Unlike join indexes, hash index definitions do not permit you to specify secondary indexes.
The major difference in storage between hash indexes and standard base tables is the manner in which the repeated field values of a hash index are stored.

Reference Indexes

A reference index is an internal structure that the system creates whenever a referential integrity constraint is defined between tables, using a PRIMARY KEY or UNIQUE constraint on the parent table in the relationship and a REFERENCES constraint on a foreign key in the child table. The index row contains a count of the number of references in the child (foreign key) table to the PRIMARY KEY or UNIQUE constraint in the parent table. Apart from capacity planning issues, reference indexes have no user visibility.

References for Teradata

http://www.teradata.com/products-and-services/database/
http://teradata.uark.edu/research/wang/indexes.html
http://www.teradata.com/products-and-services/database/teradata-13/
http://www.odbms.org/download/illuminate%20Comparison.pdf