
Monday, September 30, 2019

Edgar Allan Poe's Tribute

The poem â€Å"Annabel Lee† by Edgar Allen Poe is written to tell the story of the speaker's greatest love. The speaker and Annabel Lee loved each other with â€Å"a love that was more than love† until she fell ill and died (9). The speaker blames the angels for killing his darling and proves his love for her by attending her graveside every day for the rest of his life.One way the speaker demonstrates his love is by describing their home (the setting of the poem) as a â€Å"kingdom by the sea† (2). This means the speaker sees himself as royalty because the love he and Annabel Lee share makes him so incredibly wealthy and powerful. This power and wealth was so great, in fact, that â€Å"heaven coveted† the love about which Edgar Allen Poe wrote (10). The angels were jealous of this love being shared on earth, which was apparently more wonderful than anything they had experienced in heaven as angels. The use of the word â€Å"coveted† implies a darker meaning. This was not the simple jealousy of a teenage girl. The angels were committing a sin, breaking one of the commandments of their Divine Master by coveting the love between two of His children. Finally, the speaker's grief at her death further implies the depth and strength of their love. It is logical that the greater the love, the greater the grief; the inverse is also true: the greater the grief, the greater the love. Instead of merely being laid to rest in a coffin or a grave, death â€Å"shut her up in a sepulcher† there â€Å"by the sea† (19, 40). Sepulcher brings such dark connotations that we can almost see the speakershrouded in black after her death, mourning as deeply as the seanext to her tomb.Edgar Allen Poe contributed to the extremity of the poem by using a tone of reverence and pride. This is not some silly poem about puppy love. The love shared by Annabel Lee and the speaker was serious, and seems to be one we can only refer to with a sense of sobriety and admiration. 
In line 28, the speaker refers to his pride by comparing himself to those older and wiser, saying that he had experienced a love that "was stronger by far" than anything those older and wiser had experienced. The

Sunday, September 29, 2019

My Journey to America

My journey to America was an unforgettable experience. I say this not because I was able to travel to the land of my dreams but mainly because the journey improved my outlook on life through the many lessons and insights it taught me. My country of origin is Kenya, located in Eastern Africa. Separating these two countries is the Atlantic Ocean, a large body of water that seemed to signify the impossibility of my coming here (Crofton, 1994, p. 434). But fortunately, this huge obstacle was overcome, and now I am enjoying the fun and opportunities offered by a country that had once been only a dream.

You cannot imagine the excitement I felt when I learned that I would travel to America. For the majority of Africans, America is a land of golden opportunity, a place where one can better oneself. It offers rare exposure to advances in technology, an essential factor in a person's twentieth-century learning. And above all, America has many fun, exciting, and historical places to visit. Armed with such lofty thoughts in mind, going through the hassle of filing travel papers at the US Embassy meant nothing to me.

I bade goodbye to Kenya last __________. My itinerary was from Nairobi, Kenya to ______, USA. It would take approximately ______ hours to reach the US. On the plane I tried to contain my excitement, although flying above the wide ocean was both thrilling and nerve-wracking. Looking down from the window of the plane, I could see a wide expanse of blue water stretching miles and miles beyond, as if it would never end. Although the sight was beautiful to behold, I could not wait to see land, for by then I suddenly felt an awesome fear of being in a place totally unknown, as if I were lost in the middle of nowhere. I realized then that uprooting oneself from familiar places is not that easy, after all.
As the plane made its way across the earth, my mind was filled with thoughts of what I was going to do in America, the friends I would meet, the places I would visit, and the things I would have. The thoughts all came to me at once, both thrilling and enchanting me at the same time. Amidst these thoughts, my mind raced back to the place I had just left behind, the loved ones I would not see for some time, and the places that, surprisingly, I would miss; then out of nowhere I felt nostalgic, and if I had not been strong enough, tears would have fallen from my eyes. It was painful to realize that for me to experience something new, I had to let go of something I held dear. But I am glad to think that someday I will be back.

Finally, after some time I approached the place that for more than a century had drawn all kinds of races and nationalities to its shores like a magnet. I expected the view from above to be much different from the aerial view of the country and continent I had just left behind. True enough, it was way, way different. Whereas the place I had just left was dominated by forest and desert spotted with human dwellings and buildings, the view of America from the air was simply breathtaking to me. A vast array of buildings crowded endlessly below; tall skyscrapers and the Statue of Liberty seemed to reach out to me in welcome. I could not believe that the famous America was right in front of me! I felt an awesome sense of self-fulfillment then, perhaps because America has this magical way of making a person think that finally he has seen the real world. Naturally I could not wait to set foot on American soil. For the first time in my life I was surrounded by people who looked different from me. In the airport were white people, brown-skinned people, and dark ones like me.
I was fascinated by the presence of different nationalities, busy talking to one another in languages I did not understand and hurrying past me to places God knows where. I realized later that I would often come across these different types of people on just about any street in America. I wondered much about them, the places they came from and the loved ones they left behind in countries far away. I know that most of them come to America to earn more money, and I wonder if they fulfilled that goal. Having settled here for _________ (months or years) now, I can say with authority that foreigners here oftentimes experience physical, emotional, and mental suffering. Many of them are homesick. There is no place, after all, where man will be untouched by the negative experiences of life. Such a realization has developed in me a deeper respect for people who left their homeland to find jobs on other shores.

From the many successful people who pass my way daily, I can say that America is indeed a land of golden opportunity, but only for people who work hard. In life, no matter where we are, we simply cannot expect a golden egg to fall into our lap. Here there are also many poor people. I do not know why they live that way, but one thing America taught me is that if you want something good to happen in your life, you must be willing to sacrifice, to let much sweat fall from your forehead. A person should not wait for opportunity; he must look for it, and when he finds that opportunity he must grab it with both hands.

The list of beautiful places to visit and exciting things to do in America is endless. There are many big parks decorated with beautiful flowers and housing different kinds of animals (many of these animals are native to my homeland, like lions and elephants). When I look at these caged animals I cannot help but compare them to their relatives back home who roam freely in the African wild.
America is not a place of freedom for them; on the contrary, America is a trap, a prison. I felt sad, but these are the prices that must be paid if the people of America are to see live African wildlife. Furthermore, America has many large shopping centers filled with all sorts of stuff, very inviting to touch, to look at, and, if I have the money, to buy. There are just so many things to choose from, all of them beautiful. It is true that America has a lot to offer, especially for a Kenyan like me. Being surrounded by all these beautiful things makes me look at life more positively.

I have had negative experiences in America, but I will cite only one that I know is shared by most foreign people like me, and that is the ugly face of racial discrimination. I know that racial discrimination is present when people of other races I associate with treat me with distrust without reason. I know some of them did not actually want the feeling to come, but it involuntarily sprang up somewhere. Racial discrimination is a big problem here. This just shows that great America, like any other country in the world, has its own issues to solve.

Obviously, there are many differences between America and my home country, but there is one difference I would like to share here. It seems to me that the people who live in America are always in a hurry. It seems they have so many things to do but not the time to do them. That is why most Americans (including those who are not American but live here) are in a perpetual state of stress. They are busy chasing "something," so they do not have the time "to smell the flowers." This is so much unlike Kenya. There, it seemed we had plenty of time to rest and to reflect, perhaps because our lives are less complicated and our dreams are simpler. For most Kenyan families, providing daily food on the table is a big accomplishment. My journey to America was an adventure of a lifetime.
I will never forget the many new experiences that came along and the lessons they taught me. Indeed, I can say that my journey to America opened doors in my life that were once closed.

Reference

Crofton, Ian (Ed.). (1994). The Guinness Compact Encyclopedia. London: Guinness Publishing Limited.

Saturday, September 28, 2019

Astronomy Assignment Example | Topics and Well Written Essays - 250 words

Astronomy - Assignment Example 58-60). Ursa Major has also moved downward, has crossed the meridian, and its pattern has also changed. There is no noticeable change after a repeat of the observation ten minutes later (Hale, p. 60). Less change could be recorded after ten minutes. From these observations, we can conclude that Polaris is always on the meridian, a star that never sets and can always be seen throughout the day and night at the same position. Ursa Minor and Ursa Major, by contrast, rotate as the sky rotates, will set at some point, and cannot be seen throughout the day and night at the same position (Hale, p. 59). The next observation was at 02:29. The star Polaris had not changed position but remained on the meridian line, acting as the tilt point of Ursa Minor. However, little change was observed with Ursa Minor (Pasachoff and Filippenko, p. 80). Its position has changed a bit, as it is on the meridian line, but its pattern has not changed. Likewise, Ursa Major is changing position while its pattern remains the same (Hale, p. 60). It is slowly approaching the west side, and the same stars seen in its pattern at the beginning of the observation could still be seen: Mizar, Alkaid, and Dubhe (Pasachoff and Filippenko, p.

Friday, September 27, 2019

The Great Railroad Strike 1877 Research Paper Example | Topics and Well Written Essays - 750 words

The Great Railroad Strike 1877 - Research Paper Example The strike failed to a considerable degree, but it evoked labour upheaval, social change, political mainstreaming, and organization among American labourers. The workers at the Baltimore and Ohio Railroad went on strike because their wages had been reduced twice over the previous year. The striking workers refused to let the trains run until all the pay cuts were restored to the affected employees.1

Following the Civil War, the railway industry was the leading sector in industrial growth. The development of several railway lines was fuelled by government grants and subsidies during the 1870s, making the railroad the largest commercial sector in the USA. The Chicago Tribune termed it "the very heart and life of the modern system of commercial existence." As the expansion of the railways continued, their economic and political power grew dramatically. In contrast, the workers in the railroad industry lacked both political and economic power. An increasing influx of workers from Europe and rural areas to the cities meant that labour supply surpassed demand. Since an individual worker was not considered a valuable commodity, companies rarely had a good incentive to respond to workers' needs. In addition, there was little ideology of solidarity among workers, and labour unions were viewed as criminal gangs. Some existing unions were unpopular with the public and were unable to execute their duties because they feared legal challenges.

The problems faced by the labour movement intensified in 1873 when economic panic erupted. The unemployment rate stood at 25%, and the value of an individual worker dropped considerably. Moreover, as the railroads continued cutting wages, the unions and workers revolted and fought back. Railroad workers faced a harsh working environment and were unable to respond collectively to the persistent wage cuts.
As corporations suffered consistently reduced revenues and economic depression

Thursday, September 26, 2019

Realist Theories of IR Essay Example | Topics and Well Written Essays - 500 words

Realist Theories of IR - Essay Example The strong point of this view is its accommodation of the element of conflict that exists even within an individual. Human nature is constantly waging war against itself, with a person's desires mostly conflicting with outside influences such as learning. A political realist is holistic in thought because of the acknowledgement of the existence and relevance of standards other than political ones. The political realist refutes the "legalistic-moralistic approach" to international politics and cannot subordinate the standards of other schools of thought to those of politics. Realists vehemently defend the autonomy of the political sphere against its subversion by other modes of thought without disregarding their existence and importance (Morgenthau 14).

Kenneth Waltz also had a realist perspective on international relations, which he called neorealism, or structural or defensive realism. Waltz used the turmoil inherent in international relations to restrict the global "net" to its classical international component (Waltz 29). From the neorealist approach, examination of the structures of international systems is the best way to understand international politics. The structures of the international system are reflected in alliances and other cooperative arrangements between nations (Mearsheimer 32). The polarity of the system becomes the key factor in international relations; depending on the number of dominant superpowers, a system might be unipolar, bipolar, or multipolar.

John Mearsheimer took a different view, which he referred to as "offensive realism." Mearsheimer's perspective follows the principles of Kenneth Waltz's theory in utilizing the "structure" of the international system to derive the behavior of states (Mearsheimer 25). Mearsheimer's theoretical foundation outlines that the international system is anarchic, all

Wednesday, September 25, 2019

Connectivity Essay Example | Topics and Well Written Essays - 500 words

Connectivity - Essay Example In addition, in the current business management arrangement we need to establish a centralized business operational management and supply chain management framework. This would lead to better business performance and corporate management. For the establishment of such a large network we have three important alternatives: physical transmission media, wireless connectivity, and satellite or point-to-point links for the overall business connectivity. Starting with our primary option (physical transmission media), we could establish a large MAN layout based on a fiber-optic communication medium to connect all the stores of the Star Clothing Business. However, this alternative is considered extremely costly. A network based on this technology will require massive investment and will greatly increase the expense of the overall network arrangement. Thus, with business stores distributed throughout the nation, it becomes almost impractical to establish and adopt this kind of network technology.

Tuesday, September 24, 2019

Current Issue Analysis Essay Example | Topics and Well Written Essays - 1000 words

Current Issue Analysis - Essay Example And today, according to the official terminology, Indian women continue to belong to the so-called weak groups. In my opinion, such misunderstanding of the role of feminism and of the importance of defending women's rights occurs because of the lack of support from men and boys. Emma Watson, who after growing up became not just the "Harry Potter girl" but also a UN Women Goodwill Ambassador, noted at the launch of the HeForShe campaign in 2014 that fighting for women's rights too often becomes synonymous with man-hating, and this has to stop. It is a real problem that women are choosing not to identify themselves as feminists. It is right to make it possible for women to make decisions about their own bodies. Women should be afforded the same respect as men. But there is no country in the world where women may be sure they will receive these rights (Watson, 2014).

Some days ago the Daily Mail published some shocking videos of women being gang-raped. Most awful is that the men caught on camera were smiling while carrying out the alleged attacks. Now a movement called #ShameTheRapistCampaign fights against repeated incidents of extreme sexual violence towards Indian women. Sunitha Krishnan, through her organisation Prajwala, a women's rights NGO based in Hyderabad, India, released the videos and images of the alleged attackers, and in this way her #ShameTheRapistCampaign was born. She got the videos from a concerned man who "forwarded her the videos after they were sent to him on the messaging service WhatsApp." After the campaign was launched on Indian national television, somebody tried to intimidate Mrs Krishnan by throwing rocks through the window of her vehicle (Charlton, 2015). Thus the issue of defending women's rights is particularly important for Indian women. They are discriminated against from infancy or even earlier, and I can prove this statement with many examples. In

Monday, September 23, 2019

Terrorism and Homeland Security Quiz Assignment

Terrorism and Homeland Security Quiz - Assignment Example Force multipliers, in terrorism, are the processes and means that augment terrorist activities. The five types of force multipliers in terrorism are religion, media, transnational support, technology, and recruitment, as discussed below. Religion offers an auspicious environment for member groups to share a common ideology towards a certain goal. Religion offers doctrines used to recruit and train new members. Members must respect the belief systems of the group, to the point that they will even sacrifice their bodies to uphold its teachings. Terrorist groups exploit various religions, for example invoking Islam by quoting verses in the Quran that they claim encourage killings (Chandra, 2003). Terrorist groups use the media to convey their message to the wider world, which enhances their survival, and as a means to instill fear in the populace (Hamm, 2007). They use the media to convey a message of ill intent that destabilizes moral behavior in society. Transnational support provides terrorists an opportunity to carry out attacks by offering financial support and military empowerment (Hamm, 2007). Terrorist groups use this support to operate outside the law, establishing their own rules in order to control various localities within their reach and enhance their dominance. Terrorism takes advantage of the rapid advance of technology in the contemporary world, which allows the invention of explosive devices that terrorist groups use to make sophisticated bombs. Terrorists inculcate various technological skills in their training in order to make such explosive devices.

Sunday, September 22, 2019

Oracle Corporation Essay Example for Free

Oracle Corporation Essay The Central Intelligence Agency had commissioned the project to build a commercial database management system for IBM mainframe computers and code-named it Oracle. Software Development Laboratories took the Oracle name in 1982. After completion of the project, Ellison, Miner, Oates, and Scott had a vision of developing and distributing their database software as a profitable business opportunity. From 1982 to 1986, Oracle achieved 100% annual growth. On March 15, 1986, Oracle went public, one day after Microsoft's initial public offering. From 1986 to 1989, revenues skyrocketed from $55 million to $584 million, making it one of the largest independent software companies in the world, employing over 4,000 people in 24 countries. The Oracle Corporation's objective of becoming a profitable database software company had been achieved.

Market and industry growth continued until the third quarter of 1990, when Oracle suffered a $15 million loss on $240 million in revenues. Between 1988 and 1991, operating margins plummeted from 23 to 3 percent. During this time, the company's stock value also fell. Oracle responded by letting go of 400 employees in the United States and reorganizing its senior management team. This business problem was the direct result of something the company simply overlooked: as the company focused all of its energies on growth during the late 1980s, it lost sight of its internal operations and infrastructure. It also planned its expenses based on the 100% annual growth rate of prior years, causing it to lose money. In addition, it delayed the delivery of its latest product, which allowed the competition to draw closer. However, the release of its next product would see Oracle quickly rebound and turn things back around.
In July of 1991, Oracle was working on new database software that could manage text, video, audio, and other data through a set of loosely connected servers. This database software was called Oracle 7, and it was one of many IT solutions that would put Oracle ahead of the competition and save the company. 1996 saw database sales grow by 20 percent, slowing to 10 percent in 1997, the year Microsoft released its rival SQL Server, a cheaper alternative database with aspirations of stealing Oracle's market share. During this time, Oracle attempted to expand beyond databases and entered the two largest application software markets: enterprise resource planning (ERP) and customer relationship management (CRM). Ellison saw this as a lucrative business opportunity, considering that the ERP market was estimated at $20 billion in 1999 and projected to exceed $65 billion by 2003, while the CRM market was estimated at $4 billion in 1999 and projected to exceed $16 billion by 2003. Ellison recognized that CEOs wanted to understand profitability per customer and to be able to detect dissatisfaction before the customer leaves. He realized that ERP and CRM software would allow CEOs to do that by turning database information into knowledge about customers.

Ellison's vision of internet-enabled software began to take shape in 1999 with the release of Oracle8i, followed by internet-enabled versions of all the company's key software products. A key IS solution in the development of Oracle Corporation would be Oracle E-Business Suite, a collection of ERP and CRM applications that automated many necessary business functions. This would be the beginning of the high-impact IS solutions to follow. In June of 1999, Ellison declared that Oracle would attempt to save $1 billion by the end of 2000 by transforming into an e-business. Ellison then eliminated all non-e-business options from the company.
This bold move was an incredible success and a brilliant IS solution to some of the company's business problems, and the changes were easy and smooth to implement. An example given in the case was that of an expense report. In the past, a sales rep would fill out an expense report and manually send it to headquarters. Now the sales rep simply completes the forms on the web, where the report can be tracked. Not only did this create $6 million in direct savings, the reports were easier and faster to complete. This solution benefited not only employees but customers, too. In the past, if a customer wanted to demo Oracle's software, a sales rep had to set an appointment to do the demo in person. Now the sales rep can gain access to the customer's browser and, over the phone, run the demo through the browser at Oracle.com. The shift to self-service was a very necessary and profitable solution for Oracle; the company began saving millions of dollars and hours of time.

Another business problem Oracle had was a lack of centralization. One clever way the company addressed this was by changing incentives for country managers, which were originally based on revenue and were shifted to be based on margin. In the past, 97 e-mail servers existed with almost 120 databases in over 50 countries. This was dramatically reduced when Oracle gave each country CEO a choice: receive free e-mail through Redwood Shores, or pay to maintain a local e-mail server, which would directly impact their margin and, ultimately, their variable pay. This was a very effective IS solution to the company's lack of centralization. Oracle continued to centralize the business by pulling human resources, legal, sales administration, and marketing out of each country office and consolidating them at Redwood Shores. Oracle now had a single system that served everything.
Oracle saved a lot of wasted money by centralizing its marketing department. The products were the same in every country, so centralization made sense and was absolutely necessary. By June of 2000, Oracle had gone from 63 to 17 company websites worldwide; by August 2000, the company was down to one website, Oracle.com. This solution saved the money that had been wasted operating multiple websites for multiple countries and confusing the brand with different languages, colors, and logos.

The transformation to e-business saved Oracle a great deal of money, but this wasn't the only benefit of the move. The switch also generated marketing pull. Oracle's customer base grew as a result of having better information about its customers and sales outlets. The pull strategy came to fruition through two combining factors: the story of the company's transformation, and the new credibility the company gained by performing this transformation so publicly. Now, instead of sales reps attempting to sell CEOs of other companies their software, CEOs were going directly to Oracle technology to transform their own businesses. This pull allowed Oracle to open an online store, as opposed to hiring more salespeople to handle the increased demand. This latest IS solution, in turn, created more sales.

In 1999, Oracle began streamlining its Oracle University, which supported 2,500 full-time employees in 143 countries while enrolling about 500,000 students annually. These Oracle courses led to the certification of the developers and programmers the company needed to continue growing. This business solution was yet another great move designed to cultivate its own employees. iLearning technology was then created as a continuing-education extension to Oracle University's certification process. This software would be hosted online and could be updated daily without patches.
Oracle Corporation is a great example of a company with the ability to predict the future of technology and make innovations to lead the industry. It took risks, and they paid off. Larry Ellison took a big risk when he eliminated all non-e-business elements from his business and made the transformation to e-business, and his company was rewarded with tremendous cost savings and higher revenues. He also predicted, at the end of a June 2000 press conference, that the software industry would vanish and be replaced by a service industry. This remains to be fully seen, but there appears to be truth to it: cloud computing has been the next innovation in computer technology, as we see many companies now providing services that once required us to install software on our computers.

Saturday, September 21, 2019

Hedda Gabler Essay Essay Example for Free

Hedda Gabler Essay Essay In Ibsen’s drama Hedda Gabler, Hedda was a wealthy woman with a great background, until she marries Mr. Tesman. When she is chained down to this man she starts to become unstable and reveals how truly devilish she can be. From manipulating her loved ones to down-right killing them. These incidents occur because of jealousy and boredom. Hedda’s first act of despicableness is first presented when she talks to her husband’s aunt. She mentions that the maid will be unsuitable because â€Å"She’s left her old hat behind her on the chair. †, when really, it was Miss Tesman’s hat. We later find out when she is speaking to Brack that she had known all along it was her hat and just wanted to insult her. This shows how bored Hedda is where she feels the need to come up with something like that. She is also unhappy with her marriage so she doesn’t want to get close to any of her husbands family. In act 4 of the play Hedda gives Lovborg a pistol so he can â€Å"die beautifully†. She does this because she is still somewhat attached to Lovborg and is jealous of him and Thea’s relationship that was forming. She even starts to go somewhat mad after Lovborg and Thea leave. The manuscript begins to get ripped apart by Hedda, as she throws it into the fire saying, â€Å"I’m burning your child Thea!† This shows that she has basically reached a breaking point and has officially gone off the end. Hedda is a very hard character to play. This is because she is very contradicting, as Ibsen states, â€Å"sympathetically unsympathetic†. You feel sympathy for Hedda because she seems to be broken. She has been socially trapped into marriage and baring a child. Although this does not justify her actions which still keeps you scornful towards her. All in all Hedda is a very indifferent woman with a independence that she will not be taken away from her. She manipulates and deceives people in order to get her way. Yet she was slowly killing herself by doing so. 
This may be why her character is so hard to play: she is, in a way, a very non-relatable character.

Friday, September 20, 2019

Developing a Learning Organisation: HRM

Developing a Learning Organisation: HRM You are a HRM manager in a global company. Your CEO has made it a strategic priority that the company should become a learning organisation. You have been asked by the CEO to manage this project. Discuss what is meant by a learning organisation, why it is important, and, as a HR manager, how you would establish and develop a learning culture in the organisation.

WHAT IS HUMAN RESOURCE MANAGEMENT? WHAT MAKES STRATEGIC HRM MORE STRATEGIC THAN HRM?

Strategic HRM has become topical in recent years, but definitions of what is meant by the term vary widely. XXX. Typically, strategic HRM bridges business strategy and HRM and focuses on the integration of HR with the business and its environment. The main rationale for strategic HRM thinking is that by integrating HRM with the business strategy, rather than treating HR strategies as a separate set of priorities, employees will be managed more effectively, organizational performance will improve, and business success will follow. This in itself may not be enough. Tony Grundy (1998) suggests that a Human Resources Strategy in itself may not be effective; integrating corporate strategy and HR matters into an Organization and People Strategy may prove more successful.

Human resource management needs to be closely integrated with managerial planning and decision making (i.e., international human resources, forecasting, planning, and mergers and acquisitions). Increasingly, an organization's top management is aware that the time to consider organizational HRM strengths or limitations is when strategic organizational decisions are being formulated, not after critical policies have been decided. A closer integration between top management's goals and HRM practices helps to elicit and reward the types of behavior necessary for achieving an organization's strategy.
For example, if an organization is planning to become known for its high-quality products, HRM staff should design appraisal and reward systems that emphasize quality in order to support this competitive strategy. For some, strategic HRM is an outcome, as organizational systems are designed to achieve sustainable competitive advantage through people. For others, however, SHRM is viewed as a process: the process of linking HR practices to business strategy (Armstrong, 2006). Strategic management of human resources includes HRM planning. The HRM planning process involves forecasting HRM needs and developing programs to ensure that the right numbers and types of individuals are available at the right time and place. Such information enables an organization to plan its recruitment, selection, and training strategies. For example, let's say an organization's HRM plan estimates that 12 additional information systems (IS) technicians will be needed during the next year. The organization typically hires recent IS graduates to fill such positions. Because these majors are in high demand, the organization decides to begin its recruiting early in the school year, before other organizations can snatch away the best candidates. WHAT IS A LEARNING ORGANISATION? According to Peter Senge (1990: 3), learning organizations are: "…organizations where people continually expand their capacity to create the results they truly desire, where new and expansive patterns of thinking are nurtured, where collective aspiration is set free, and where people are continually learning to see the whole together." A learning organization is, simply put, an organization that learns, encourages learning among its people, and innovates fast enough to survive and thrive in a rapidly changing environment. It provides for the exchange of information, creating a more knowledgeable workforce. 
This produces a more flexible organization that encourages risk taking with new ideas, allows mistakes, learns from experience and adapts to new ideas and changes through a shared vision. Learning organizations are not simply the most fashionable or current management trend; they can provide work environments that are open to creative thought and embrace the concept that solutions to ongoing work-related problems are available inside each and every one of us. All we must do is tap into the knowledge base, which gives us the ability to think critically and creatively, the ability to communicate ideas and concepts, and the ability to cooperate with other human beings in the process of inquiry and action (Navran Associates Newsletter 1993). THE FIVE DISCIPLINES Peter Senge is a leading writer in the area of learning organizations, whose seminal works The Fifth Discipline: The Art and Practice of the Learning Organization and The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization explain that there are five disciplines which must be mastered when introducing such an organization: Shared Vision: The key vision question is "What do we want to create together?" Taking time early in the change process to have the conversations needed to shape a truly shared vision is crucial to build common understandings and commitments, unleash people's aspirations and hopes and unearth reservations and resistances. Leaders learn to use tools such as Positive Visioning, Concept-shifting and Values Alignment to create a shared vision, forge common meaning/focus and mutually agree what the learning targets, improvement strategies and challenge-goals should be to get there. (Senge 1990: 9) Mental Models: One key to change success is surfacing deep-seated mental models: the beliefs, values, mind-sets and assumptions that determine the way people think and act. 
Getting in touch with the thinking going on about change in your workplace, challenging or clarifying assumptions and encouraging people to reframe is essential. Leaders learn to use tools like the Ladder of Inference and Reflective Inquiry to practise making their mental models clearer for each other and challenging each other's assumptions in order to build shared understanding. (Senge 1990: 8) Personal Mastery is centrally concerned with self-awareness: how much we know about ourselves and the impact our behaviour has on others. Personal mastery is the human face of change: managing change relationships sensitively, being willing to have our own beliefs and values challenged, and ensuring our change interactions and behaviours are authentic, congruent and principled. Leaders learn to use tools like Perceptual Positions and Reframing to enhance the quality of interaction and relationship in and outside their teams. (Senge 1990: 139) Team Learning happens when teams start thinking together: sharing their experience, insights, knowledge and skills with each other about how to do things better. Teams develop reflection, inquiry and discussion skills to conduct more skillful change conversations with each other, which form the basis for creating a shared vision of change and deciding on common commitments to action. It's also about teams developing the discipline to use the action-learning cycle rigorously in change-work. Leaders learn to use tools like the Action-Learning Cycle and Dialogue to develop critical reflection skills and conduct more robust, skillful discussions with their teams and each other. (Senge 1990: 10) Systems Thinking is a framework for seeing the inter-relationships that underlie complex situations and interactions, rather than simplistic (and mostly inaccurate) linear cause-effect chains. 
It enables teams to unravel the often hidden subtleties, influences, leverage points and intended/unintended consequences of change plans and programs, and leads to a deeper, more complete awareness of the interconnections behind changing any system. Leaders learn to use Systems Thinking Maps and Archetypes to map and analyse situations, events, problems and possible causes/courses of action to find better (and often not obvious) change options/solutions. (Senge 1990: 23) THREE TYPES OF ORGANISATIONAL LEARNING Single-Loop Learning: Are we doing things right? Double-Loop Learning: Are we doing the right things? Triple-Loop Learning: How do we decide what is right? Single-Loop Learning Single-loop learning assumes that problems and their solutions are close to each other in time and space (though they often aren't). In this form of learning, we are primarily considering our actions. Small changes are made to specific practices or behaviors, based on what has or has not worked in the past. This involves doing things better without necessarily examining or challenging our underlying beliefs and assumptions. The goal is improvements and fixes that often take the form of procedures or rules. Single-loop learning leads to making minor fixes or adjustments, like using a thermostat to regulate temperature. Are we doing things right? Here's what to do: procedures or rules. Double-Loop Learning Double-loop learning leads to insights about why a solution works. In this form of learning, we are considering our actions in the framework of our operating assumptions. This is the level of process analysis where people become observers of themselves, asking, "What is going on here? What are the patterns?" We need this insight to understand the pattern. We change the way we make decisions and deepen our understanding of our assumptions. Double-loop learning works with major fixes or changes, like redesigning an organizational function or structure. Are we doing the right things? 
Here's why this works: insights and patterns. Triple-Loop Learning Triple-loop learning involves principles. The learning goes beyond insight and patterns to context. The result is a shift in understanding our context or point of view. We produce new commitments and ways of learning. This form of learning challenges us to understand how problems and solutions are related, even when separated widely by time and space. It also challenges us to understand how our previous actions created the conditions that led to our current problems. The relationship between organizational structure and behavior is fundamentally changed because the organization learns how to learn. The results of this learning include enhanced ways to comprehend and change our purpose, a better understanding of how to respond to our environment, and a deeper comprehension of why we choose to do the things we do. How do we decide what is right? Here's why we want to be doing this: principles. LEARNING ORIENTATIONS CREATING A LEARNING ORGANISATION The very first thing needed to create a learning organization is effective leadership, based not on traditional hierarchy but on a mix of different people from all levels of the system leading in different ways (Senge 1996). Secondly, there must be the realization that we all have the inherent power to find solutions to the problems we are faced with, and that we can and will envision a future for our organization and forge ahead to create it. As Gephart and associates point out in Learning Organizations Come Alive, culture is the glue that holds an organization together; a learning organization's culture is based on openness and trust, where employees are supported and rewarded for learning and innovating, and it promotes experimentation and risk taking, and values the well-being of all employees (Gephart 1996, 39). 
Here we will look at the three aspects of leadership that Senge identifies and link his discussion with some other writers on leadership. Overall, to create a culture and environment that will be the foundation for a learning organization, people must realize that the beginning comes with a shift of mind: from seeing ourselves as separate from the world to seeing ourselves as connected to the world (Senge 1996, 37); from seeing ourselves as separate and unimportant, robotic caricatures to seeing ourselves as integral components in the workplace. Finally, one of the biggest challenges that must be overcome in any organization is to identify and break down the ways people reason defensively. Until then, change can never be anything but a passing phase (Argyris 1991, 106). Everyone must learn that the steps they use to define and solve problems can be a source of additional problems for the organization (Argyris 1991, 100). References Argyris, C. (1976) 'Single-Loop and Double-Loop Models in Research on Decision Making', Administrative Science Quarterly, Vol. 21, No. 3 (Sep., 1976), pp. 363-375. Johnson Graduate School of Management, Cornell University. Stable URL: http://www.jstor.org/stable/2391848 Senge, P. (1998) 'The Practice of Innovation', Leader to Leader 9. http://pfdf.org/leaderbooks/l2l/summer98/senge.html Senge, P. et al. (1994) The Fifth Discipline Fieldbook: Strategies and Tools for Building a Learning Organization. Senge, P., Kleiner, A., Roberts, C., Ross, R., Roth, G. and Smith, B. (1999) The Dance of Change: The Challenges of Sustaining Momentum in Learning Organizations. New York: Doubleday/Currency. Senge, P., Cambron-McCabe, N., Lucas, T., Smith, B., Dutton, J. and Kleiner, A. (2000) Schools That Learn: A Fifth Discipline Fieldbook for Educators, Parents, and Everyone Who Cares About Education. New York: Doubleday/Currency.

Thursday, September 19, 2019

Africa Essay -- essays research papers

Africa's Resistance to the Portuguese African history has been affected tremendously by the influence of some European countries. Portugal, which probably had the greatest impact on the continent, was not shy about exploiting what it saw as profitable opportunities. One of the areas it profited from was the kingdom of Kongo. Kongo was a major Bantu-speaking kingdom astride the Congo River in west-central Africa, probably founded in the 14th century. It was governed by a king, the manikongo, whose economic power was based upon trade in ivory, hides, slaves, and a shell currency of western Africa. Within a few years after the Portuguese first encountered the kingdom in 1484, the sixth manikongo, Nzinga Mbemba, later Afonso I, converted to Christianity and extended di...

Wednesday, September 18, 2019

Essay --

Chapter 11; Question 4 1. What term denotes punishment by execution of someone officially judged to have committed a serious crime? The term that denotes punishment by execution of someone officially judged to have committed a serious crime is capital punishment. Capital punishment is now applied when a person commits a serious crime such as first-degree murder or the intentional killing of a police officer. In past times the Western world used capital punishment for crimes such as witchcraft, rape, treason, kidnapping, murder, and many other acts that were deemed criminal offenses. Until recent decades, many states in the United States of America practiced this form of punishment, but many of them have decided to reserve capital sentences (the death penalty) for the most serious crimes, such as the case of State of NH vs. Michael Addison, which involved the intentional killing of a Manchester, N.H. police officer, because of the controversial arguments surrounding the ethical concerns with the death penalty. Unfortunately, the judicial system has too many flaws and is not 100% foolproof in preventing the unjustifiable conviction and execution of a wrongly accused person, so most convicted death row inmates spend many years in jail before the capital punishment sentence is carried out, to allow the proper legal rights of the accused to be exercised. The methods of execution that have been used through the years are:
• Hanging: a rope noose is put around the criminal's neck and the platform they are standing on is removed from beneath them.
• Firing squad: a group of armed shooters lines up and the accused stands, blindfolded; upon command the shooters fire several rounds ...

Tuesday, September 17, 2019

Buy Nothing Day Essay

Buy Nothing Day is a day of protest, founded in Canada in 1992, on which people are asked to purchase no goods as a way to raise awareness of excessive consumerism and its environmental and ethical consequences. Over the last 22 years it has been held annually in many nations, and activist groups continue to try to convince more countries to pledge their participation. A Buy Nothing Day, although based in good motives, is extreme and should not be established in the United States because it may hurt the economy, and it is an ineffective way to promote anti-consumerist ideas. Asking American consumers to boycott all goods for a day could have negative effects on the country's economy in many ways. Consumer spending almost single-handedly carries the economy, making up almost 70% of the gross domestic product. Not only does a Buy Nothing Day have the potential to lower the GDP, but it would also cause instability in an already fragile economy that is still recovering from a recession. On a more personal level, a Buy Nothing Day could seriously affect small businesses, which depend on daily sales much more than large companies do, as well as workers in sales positions. If stores knew that they would not see many customers on a Buy Nothing Day, they might ask many workers to stay home, which could be harmful to people who depend on daily work to pay for their living expenses. Even if a person was lucky enough to still be called into work, many salespersons are paid low base wages and then paid commission on the sales they make that day to make up for the low base earnings. If no one comes into the store to buy goods, the commission they would have earned is not available. In addition to negatively affecting the economy, a Buy Nothing Day is simply an ineffective strategy to promote anti-consumerist ideals. 
Asking consumers to completely abstain from purchasing goods is extreme and will likely not have a lasting effect on consumers' buying habits. This is because the approach, although it may cause a day of less consumption, does nothing to educate people about why excessive consumerism is a problem. The more likely result is that it will simply hasten or delay the purchase of goods to another day, and it may not have any effect at all on the purchase of necessities such as gas and groceries. In addition, its intense focus on helping the environment is misleading, as one day a year will have almost no positive effect on the environment. Purchasing goods or not, most people will still have to use fossil fuels for transportation, and large industrial factories will continue to manufacture goods just as they would have before. If this campaign truly wants to create a lasting change in the way Americans purchase goods, it should focus less on such extreme protesting and instead focus on year-long advertisements that better promote the reasons behind the campaign, and on consumer education that would teach people how to make smart decisions when purchasing goods. Starting a Buy Nothing Day in the United States could cause instability, damage an already fragile economy and hurt individuals who depend on sales for their livelihoods. In addition, it is altogether poorly constructed and will not change the way people consume goods, because it does not adequately educate people about the cause, nor will its one-day strategy have the impact on the environment that the campaign wants it to. Although it makes good points about the extreme level of consumerism in the modern world and its negative effects, its extremist approach is not the way to go. Better promotion of the ideas behind a Buy Nothing Day and consumer education are the way to create a lasting effect on the way people purchase goods.

Monday, September 16, 2019

American Idol Case Study

Case Analysis Week 1 American Idol Case Mostafa Morshedi MKT 645 Qualitative Research in Customer Behavior California Intercontinental University Date: 11/18/2012 American Idol Case To perform perfect marketing research, one needs to identify and define the marketing research problem accurately and then develop a proper approach. The American Idol case is a challenging management decision and marketing research problem case, focusing on the reasons to conduct a study on the show's viewers and voters. In this case study, we review defining the marketing research problem and developing an approach, including the objective/theoretical framework, analytical model, research questions, hypotheses and specification of the information needed. Discussion According to the case, the management decision problem confronting Marcello and Litzenberger could be "Do we need to conduct a study investigating American Idol viewers?" (Malhotra, 2010, p. 780). The corresponding marketing research problem would be "to determine who watched and voted in the 2009 season of American Idol and to determine how durable the show's concept is" (Malhotra, 2010, p. 781). In fact, they should conduct the study in order to understand the viewers' and voters' demographics by age and sex. The study's outcome is worthwhile for sponsors like Coca-Cola and Ford, who invested millions in the show or are interested in investing in the future. The specific components of the marketing research problem are defined as:
* What is the age demographic of American Idol's watchers and voters?
* How effective are the sponsors' ads in the show?
* How durable is the show's concept?
* How can sponsors motivate voters?
The theoretical framework for the study is based on statistics, using the normal distribution function with 95% certainty (Malhotra, 2010, p. 781). 
As we are seeking the age demographics of the show's viewers and voters, it is rational to use a graphical model, as it provides a visual picture of the relationships between variables (Malhotra, 2010, p. 51). Research questions and the relevant hypotheses could be:
* Do teenagers vote more than adults do?
* H1: Teenagers are the majority of voters.
* H2: Adults vote more than teenagers do.
* Are women more interested in voting than men are?
* H3: Women are devoted fans and consequently vote more than men do.
* H4: Men vote more than women do.
* How many of the show's watchers vote?
* H5: More than 90% of watchers vote.
* H6: 70% to 90% of watchers vote.
* H7: 50% to 70% of watchers vote.
* H8: Less than 50% of watchers vote.
* Do voters and viewers remember the sponsor?
* H9: All remember who the sponsor was.
* H10: They hardly know about the sponsor.
Based on the above components, analytical model, research questions, and hypotheses, we can determine the specification of the information needed:
* The age demographic of show viewers
* The age demographic of voters
* Sex of show viewers
* Sex of voters
* Participation percentage in voting
* Sponsorship effectiveness and durability
* The main reasons for voting/not voting
Conclusion The key concepts of a marketing research problem are to first determine the management decision problem and then define a broad marketing research problem, which in turn should be narrowed down into specific components. These specific components guide researchers to define an approach to the problem, find a relevant objective/theoretical framework and choose, among the analytical models (verbal, graphical and mathematical), the one that best matches the research objectives. Research questions are drawn out of the marketing research problem approach. Hypotheses are rephrased research questions that guide decision makers on the problem and will be accepted or rejected after the research is done. 
The most important concept is that the whole process should be integrated and focused on providing the most accurate answer to the management decision and marketing research problems, especially in large projects. References Malhotra, N. K. (2010). Marketing Research: An Applied Orientation, 6/E. Boston, MA, USA: Prentice Hall.
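As an aside, the statistical framework the case cites (normal distribution, 95% certainty) can be illustrated with a short Python sketch of how a voting-participation hypothesis like H5 might be checked against survey data. The sample counts below are purely hypothetical, not figures from the case or from Malhotra.

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation confidence interval for a proportion.

    z=1.96 corresponds to the 95% certainty level mentioned in the case.
    """
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the sample proportion
    return p - z * se, p + z * se

# Hypothetical survey: 312 of 480 sampled viewers say they voted.
low, high = proportion_ci(312, 480)
print(f"Estimated voting rate: {312/480:.1%} (95% CI: {low:.1%} to {high:.1%})")
```

Since the entire interval here falls well below 90%, a hypothesis such as "more than 90% of watchers vote" would be rejected at the 95% level for this (made-up) sample.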

Sunday, September 15, 2019

Planning and Strategic Management Essay

Management Chapter 10 Planning and Strategic Management Planning Overview Importance of Goals: Goals provide a sense of direction. Goals focus our efforts. Goals guide our plans and decisions. Goals help us evaluate our progress. The Importance of Planning in Organizations The Hierarchy of Organization Plans Strategic Plans: plans designed to meet an organization's broad goals. Operational Plans: plans that contain details for carrying out, or implementing, those strategic plans in day-to-day activities. How Strategic and Operational Plans Differ Time Horizons: strategic plans tend to look ahead several years or even decades, while for operational plans a year is often the relevant time period. Scope: a strategic plan affects a wide range of organizational activities, while an operational plan has a narrower and more limited scope. Degree of Detail: strategic plans are stated in terms that look simplistic and generic, while operational plans are stated in relatively finer detail. The Evolution of the Concept of Strategy Strategy: the broad program for defining and achieving an organization's objectives; the organization's response to its environment over time. Strategic Management: the management process that involves an organization's engaging in strategic planning and then acting on those plans. It emphasizes plans for attaining objectives, the process of seeking key ideas (rather than routinely implementing existing policy), and how strategy is formulated, not just what the strategy turns out to be. The Strategic Management Approach Dan Schendel and Charles Hofer have suggested four key aspects of strategic management: 1) Goal Setting 2) Strategy Formulation 3) Administration 4) Strategic Control The Strategic Management Process Strategic Planning includes both the goal-setting and strategy-formulation processes. Strategy Implementation involves action-based decisions. 
Levels of Strategy: Some Key Distinctions Corporate-level strategy: strategy formulated by top management to oversee the interests and operations of multiline corporations. Business-unit strategy: strategy formulated to meet the goals of a particular business; also called line-of-business strategy. Functional-level strategy: strategy formulated by a specific functional area in an effort to carry out business-unit strategy. The Corporate Portfolio Approach Portfolio framework: an approach to corporate-level strategy advocated by the Boston Consulting Group; also known as the BCG matrix.

Saturday, September 14, 2019

McDonald's vs Burger King Compare and Contrast Essay

Outline
I) Intro/Hook Thesis Statement: Although McDonald's and Burger King are similar, they have evident differences in their advertising models, their food and their commitment to the community.
II) Topic sentence 1: McDonald's and Burger King invest a lot of money in their advertisements. A) Evidence #1: The golden arches, Ronald McDonald, the Big Mac, extra cheese and the guy who promotes Burger King.
III) Topic sentence 2: Their food seems to be the same, but it isn't. A) Evidence #1: McDonald's hamburger weighs less than Burger King's. B) Evidence #2: Burger King's beef is 100% pure and they flame-broil their burgers, while McDonald's fries their beef. C) Evidence #3: McDonald's costs slightly less than Burger King.
IV) Topic Sentence 3: Their commitment to the community is different. A) Evidence #1: McDonald's has House Charities and gives away millions of dollars in scholarships, while Burger King's commitment is to provide good service and products to its clients.
V) Conclusion
McDonald's vs. Burger King "We see things not as they are, but as we are conditioned to see them" (Gandalf). Far from what we might imagine, McDonald's and Burger King have huge differences. Most people perceive them as the same fast food restaurant with different names. For this reason, "they create debates on which one of them is the superior restaurant" (Jeffrey's blog, 2012, BK vs MC). Although they have similarities, their differences become undeniable when we deeply analyze their advertising models, their food and their commitment to the community. An advertising model is the set of techniques that companies use to call public attention to their products. Two of the best fast food restaurants in the world, McDonald's and Burger King, invest a lot of money in their advertisements. Despite this, it's quite remarkable that McDonald's is smarter. Whenever we hear golden arches, Ronald McDonald, Big Mac, or extra cheese, we think about McDonald's. 
In contrast, what do we think about when we hear Burger King? Maybe some guy who appears in their commercials, but besides that, there is nothing startling about the advertising they use. Their food seems to be the same, but it isn't. On one hand, McDonald's hamburger weighs less and has only 9g of total fat, while Burger King's hamburger has 12g and a saltier taste. On the other hand, Burger King's beef is 100% pure and they flame-broil their burgers, while McDonald's fries their beef. That's why they taste different. Concerning cost, McDonald's simple burger is lower at $0.89, while Burger King's simple burger is $0.99. Their commitment to the community is also different. McDonald's has had House Charities since 1974, through which they help thousands of parents stay by their sick children's side. In addition, they give away millions of dollars in scholarships to help people who can't afford college. On the other hand, Burger King has some scholarship programs, which help poor families. However, their strong commitment is to provide good service and products to their clients and to make every Burger King restaurant a place where people love to go every day. Even though McDonald's and Burger King are really similar, they are also really different. They both try to have good advertising, but McDonald's is, most of the time, ahead. Their food seems to have the same condiments, but again, it is far from being the same. They appear as the two most famous fast food restaurants around the world, but each of them has its own techniques and secrets for being outstanding. McDonald's, besides the service it offers, helps the community, and Burger King's restaurants are committed to being the best for their clients. Yes, they are fast food chains, they are famous, they are similar; but they also have huge differences in their food, advertisement, and the way they help the community. Reference:
- (Jeffrey's blog, 2012, BK vs MC). http://sites.cdnis.edu.hk/students/043135/2012/01/24/burger-king-vs-mcdonalds/
- http://www.burgerlad.com/2013/01/mcdonalds-limited-edition-big-tasty_4872.html
- http://www.thesaleslion.com/reasons-mcdonalds-crushes-beats-burger-king-year/

Friday, September 13, 2019

Case Study Analysis Bush Boake Allen Marketing Essay

1. Introduction Since its foundation in 1966 by the merger of three British companies, Bush Boake Allen had been outstanding, known as one of the leading firms in the flavor and fragrance industry. The firm seemed to be in a stable industry, as food and fragrance are closely associated with people's daily lives. However, BBA had to cope with cost pressure and high risk given its traditional business model. On top of that, by using new technologies, some firms could analyze the production cost information of flavors and even their chemical components, so flavor prices in the market might be forced down (Stefan Thomke and Ashok Nimgade 2000). For these reasons, Julian Boyden, CEO of BBA, is about to begin a new business strategy called the "Mercury Project", which allows customers to actually participate in the flavor development process via online-based application software. In a setting where customers can handle flavor, there may be advantages in terms of time saving and a high rate of acceptance by the customers who manipulate the flavors in development. This may bring about substantial change not only to the firm's business model but also to the relationship between the firm and its customers. The outlook, however, is not absolutely optimistic for the firm, as senior managers of BBA countered that the new approach may be somewhat challenging and controversial on the following issues. First and foremost, the firm may be concerned with how much authority it gives customers to control flavor development. This is related to where the flavor sample machine should be located. For example, if customers get the authority to control flavor development at their own sites, they would have to pay half a million dollars for the machine, which may not be very affordable to them. 
Secondly, even if customers have the opportunity to operate the new machine, they could be frustrated if they have difficulty operating the machine and software or fail to get the flavor they initially wanted. What is more, customers who take advantage of the new software might underestimate the firm's flavorists. Thirdly, the role of marketing is doubtful in the new business model. Traditionally, the marketing division had a significant impact on the firm's performance, because marketers had close relationships with their customers from flavor development through to delivery of the finished sample. If customers become directly involved in flavor development, the role of marketers may decline. This paper will begin with an overview of the company and the market environment of that time period. The paper will then continue with an analysis of the business strategy and, in closing, present managerial recommendations for Bush Boake Allen. 2. Company Overview: Bush Boake Allen Since its foundation in 1966 by the merger of three British companies (Bush Ltd., A. Boake Roberts Ltd., and Stafford Allen Ltd.), Bush Boake Allen, Inc. had provided flavors and fragrances to consumer products companies for use in foods, beverages, soaps and detergents, and so on. BBA's key global strategy had been maintaining a decentralized structure: it tried to empower regional subsidiaries to make decisions locally (Stefan Thomke and Ashok Nimgade 2000). Especially in the 1980s, through a "Gaps in Maps" strategy, it started to launch global sites to accomplish consistent supply to customers and meet local preferences. By 2000, BBA had 6 major sites (Montvale, Dallas, London, Chennai, Singapore) and 13 minor sites worldwide.

Multimedia and design Assignment Example | Topics and Well Written Essays - 250 words

Multimedia and design - Assignment Example For any professional in any occupation where they build or construct something from scratch, there are right ways and wrong ways to utilize elements of design, and they are not all based on what someone simply likes. Looking at fashion, someone can design an outfit, but is it functional? Badly designed outfits will never be worn and will never be purchased if they do not include good elements of design. The same goes for a car: if the design of a car is based only on what its creator likes, then no one will buy it. Design in these fields has to do with marketability, and bad design can crush profitability. Interior design is another field where elements can create different moods. If a person mixes and matches fabrics and textiles purely according to their own likes, the space can be overwhelming to be in, or can even evoke a negative response. When it comes to design in technology, if the design of a website is not functional, no users will visit the site. If it is cluttered and chaotic, it is not appealing to anyone. Looking at multimedia as a personal experience, it is all about using the design of pictures, interactive features, and other elements to create ways for people to get something out of their experience. In blogs, websites, or social networking sites, the goal of multimedia techniques is to intrigue others. It is a form of marketability and branding to enhance a person's visit. It is not necessarily for one's own use, but for increases in productivity and profitability by improving others' personal experience. It creates more of a personalized

Thursday, September 12, 2019

Sexual Harassment in the Work Place Research Paper - 1

Sexual Harassment in the Work Place - Research Paper Example Sexual harassment is an issue caused by different factors arising from socialization, power, and politics, among others, making harassment sometimes inevitable in many occupations. Sexual harassment used to be a key concern in government and state-related jobs, but due to the rising cases and poor measures to control the issue in both public and private employment areas, governments had to step in to protect the victims and enforce order. Sometimes co-workers, managers, and employers find themselves in compromising and violating situations because they overlook harassment and its impact in the workplace. There are many behaviors, such as unwanted pressure, looks, touching, and verbal, non-verbal, and physical communications and actions, that can create sexual harassment, either intentionally or unintentionally, and that inform the legal definition of harassing conduct. Title VII is enforced by the Equal Employment Opportunity Commission (EEOC), which has built up a large body of regulations and guidelines that supply the legal meaning of harassing behavior and lay out the standards to be followed by courts and enforcement agencies in handling sexual harassment charges (ICRC factsheet 1). As part of the social context of working environments, employees get to socialize, which could have either a positive or negative effect in the long run. As a benefit, it improves teamwork and support in job performance; sometimes the relationships go beyond the work domain, and employees may marry and have families of their own, since there are few or no laws that restrict them. Similarly, the law works to ensure integrity and morality in the workplace, such that if an employee is not willing to engage in a sexual relationship with his or her co-workers, employers, supervisors,...
This paper shows that business sexual harassment training programs and the establishment of complaints committees, possibly outside the line of management, with gender equality and expertise in leading and counseling people, are required by law at the business level. Businesses have many rules and regulations that govern employees and management. However, some of those rules are optional and may not be strictly enforced; federal, state, labor, and international laws instead require the establishment of certain policies, such as those on sexual harassment, which must enforce and comply with the requirements of the law. Businesses, regardless of size, have no option but to deploy such policies, because they are among the necessary policies regulated by law. This report concludes that workplace sexual harassment affects individuals psychologically, as well as their behavior in their social lives and in the workplace. It is a problem that puts indirect pressure on people to quit their jobs, due to the hostile working environment, especially when control measures are lacking. In some cases, it causes trauma that leaves individuals unable to perform their roles, due to emotional and physical stress. It also demoralizes the workers involved and may cost them their self-esteem. International, federal, state, and business rules and regulations put measures in place to define, prohibit, and control sexual harassment, among other forms of discrimination, which must be enforced through set procedures and institutions for filing complaints and seeking protection. Employers and their employees also have responsibilities in administering and complying with the laws when resolving sexual harassment disputes.

Wednesday, September 11, 2019

Disabled Entrepreneurs Literature review Example | Topics and Well Written Essays - 3000 words

Disabled Entrepreneurs - Literature review Example A disability is a condition or function judged to be significantly impaired, relative to the usual standard of an individual or group. The term is used to refer to individual functioning, including physical impairment, sensory impairment, cognitive impairment, intellectual impairment, mental illness and various types of chronic disease. Furthermore, people with disability make up 20% of the total population of the poorest people in the world. There are 10 million individuals with disability in the UK alone, which comprises 18% of the total population of employed individuals (Wood et al., 2012: p. 146). A large number of disabled individuals have by now built opportunities or prospects for themselves by means of entrepreneurship. The advantages of entrepreneurship for these disabled people rest largely in their self-reliance and in the chance to engage in their own business decision-making, the capability to set their own timetable and pace, and the avoidance of the stereotypes and discrimination that are at times observed in the practice of recruitment, leading to underemployment or unemployment. Decreased transportation difficulties offered by home-based businesses are important advantages too. Disabled individuals usually face challenges, difficulties, or barriers when trying to embark on entrepreneurial projects, particularly in obtaining the resources or capital required for business start-ups, for they do not have adequate resources or credit to fall back on as indemnity for a loan (Parker, 2009). This paper discusses the barriers confronted by disabled entrepreneurs and the possible measures that can be implemented to help disabled people become successful entrepreneurs and gain self-sufficiency and confidence. In certain instances, they may not possess the assets, knowledge, or information needed to formulate a business plan, a successful path to economic self-reliance

Tuesday, September 10, 2019

Initiating an Assessment Plan for a Research University Assignment

Initiating an Assessment Plan for a Research University - Assignment Example The first stage of assessment is to identify the objectives of the program and select the goals of higher education that need to be assessed. The several universities which have expressed concerns about the decline of learning in higher education need to carry out assessment programs to find the areas which have lacked vision and effort in the educational process, and also to undertake appropriate strategies to address the fall of higher education and learning among students. This requires the academic institution to carry out collection programs for gathering useful data and information to be analyzed further. Analysis of the prevailing learning techniques, the response of the students, the level of interest of the students, the efficiency of the teachers, and the eventual success of the learning process would provide useful insight to academic leaders for bringing about necessary changes in the learning process in higher education (OIRA, 2013, p.1). The decline in higher education could be attributed to a large number of factors, including the rising cost of higher education, the increase in debt of the students as well as the universities, and the decline in the quality of pedagogy and of the students who participate in the higher education process in various subjects. The steep rise in university tuition fees has resulted in a reduction of interest among students due to lack of affordability. The cost of university per student has risen five times the rate of inflation since 1983. This has resulted in a shortage of funds, for which the universities have had to incur debt as well as raise the fees for every student. The raised university fees have decreased students' ability to afford higher education.
Almost 66% of graduates take out education loans to pursue a degree in higher education. The decline in affordability among students has caused a huge section of bright, high-quality students to move away from higher education and instead pursue employment opportunities. The declining ability of universities to pay salaries that attract high-quality professors and teaching faculty as fresh recruits has led to a fall in the quality of the universities' pedagogy. Apart from this, there have been several issues of lack of accountability and of not participating in the learning process in accordance with assigned roles and responsibilities. These are the several areas of concern faced by the universities, which resulted in

Monday, September 9, 2019

The Social Life of Borders Essay Example | Topics and Well Written Essays - 500 words

The Social Life of Borders - Essay Example Miller says concerning borderlands, "Borderlands are spaces that defy categories and paradigms, that 'don't fit,' and that therefore reveal the criteria that determine fittedness; spaces whose very existence is simultaneously denied and demanded by the socially powerful. Borderlands are targets of repression and zones of militarization, as can be seen by the recent deployment of weaponry and guardsmen along the U.S.-Mexico border. Borderlands are marginalized yet strategic…" (Bibler-Coutin 171). As such, it is not difficult for the reader to understand the unique nature of the borderlands as something that does not typify the culture and identity of either region that adjoins such an area. A secondary concept that the author chooses to discuss is that of "nonexistence". This is a unique term that encompasses elements of the illegal and undocumented nature that many immigrants have to live with on a daily basis (Lee 56). As such, the author goes in depth to discuss what such a "nonexistence" feels like with respect to everyday life and the obtainment of goods and services that so many native residents take for granted. All in all, the concept of borderlands combined with the space of nonexistence helps to present the reader with the unique externalities that are oftentimes unspoken but help to define the experience of untold thousands of individuals throughout the world. The image that has been chosen is taken from Google Earth images of the US-Mexico border. This particular image is taken from the US side of the border looking into the Mexican side. What this author found indicative and unique regarding this image is the fact that the "borderland" in this image is demarcated by a military-style fence that brings to mind images of the front lines of a battlefield.
Whereas tall fences exist in many regions of the world to keep out would-be émigrés, this particular fence is interesting in the fact that it has an

Sunday, September 8, 2019

The Al Qaida Transnational Terrorist Network Essay

The Al Qaida Transnational Terrorist Network - Essay Example But the larger issue revolved around the nature of terrorism itself and its emerging modus operandi. Whether the 11 September attacks in the United States were the delayed manifestation of Oplan Bojinka, as some believe, or whether they were an isolated plan, it is clear that terrorism, and particularly that form of terrorism practiced by al Qaeda, has fundamentally changed. Since the terrorist attacks of September 11, 2001, the United States has achieved significant successes in its war on terrorism. Removing the Taliban government in Afghanistan, thereby eliminating al Qaeda's sanctuary and training camps, has broken an important link in the process that once provided al Qaeda's leadership with a continuing flow of recruits. Toppling the Taliban also demonstrated American resolve and international support, and it underscored the considerable risk run by governments that provide assistance to terrorists. From the summary above, I would like to move gradually to a particular examination of the al Qaeda terrorist organization. I will first discuss the historical and statistical facts about the organization, then offer insights into its motivations and strategy, and finally draw conclusions about possible ways of dealing with future attacks.

History

Al Qaeda was a product of the struggle to expel the Soviet Union from Afghanistan. Portrayed as a holy war, that campaign brought together volunteers and financial contributors from throughout the Islamic world. Muslims from Algeria, Egypt, Saudi Arabia, Southeast Asia, and beyond fought side by side, forging relationships and creating a cadre of veterans who shared a powerful life experience, a more global view, and a heady sense of confidence underscored by the Soviet Union's ultimate withdrawal and subsequent collapse, for which they assumed credit.
Instead of being welcomed home as heroes, however, the returning veterans of the Afghan campaign were watched by suspicious regimes who worried that the religious fervour of the fighters posed a political threat. Isolated at home, they became ready recruits for new campaigns. There were ample reasons and opportunities to continue the fight: the Gulf War and the consequent arrival of American troops in Saudi Arabia; the continued repression of Islamic challenges to local regimes; armed struggles in Algeria, Egypt, the newly independent Muslim republics of the former Soviet Union, Kashmir, the Philippines, and Bosnia; the forces of globalization that seemed threatening to all local cultures; and the continuing civil war in Afghanistan. Organizational survival, the natural desire to continue in meaningful activity, and the rewards of status and an inflated self-image contributed powerful incentives to continue the fight. The subsequent victories of a like-minded Taliban guaranteed safe haven for the militants and their training camps, which graduated thousands of additional volunteers (Cullison, Higgins, 2001). What Osama bin Laden and his associates contributed to this potent but unfocused force was a sense of vision, mission, and strategy that combined 20th century theory of a unified Islamic polity with restoration of the Islamic Caliphate that, at its height, stretched

Saturday, September 7, 2019

Management and Graphical Front Ends Assignment Example | Topics and Well Written Essays - 2500 words

Management and Graphical Front Ends - Assignment Example MySQL is officially pronounced /maɪˌɛskjuːˈɛl/ ("My S-Q-L"),[2] but is often also pronounced /maɪˈsiːkwəl/ ("My Sequel"). It is named for original developer Michael Widenius's daughter, My. The SQL phrase stands for Structured Query Language.[3] The MySQL development project has made its source code available under the terms of the GNU General Public License, as well as under a variety of proprietary agreements. MySQL was owned and sponsored by a single for-profit firm, the Swedish company MySQL AB, now owned by Oracle Corporation.[4] Members of the MySQL community have created several forks (variations) such as Drizzle, OurDelta, Percona Server, and MariaDB. All of these forks were in progress before the Oracle acquisition; Drizzle was announced eight months before the Sun acquisition. Free-software projects that require a full-featured database management system often use MySQL. Such projects include (for example) WordPress, phpBB, Drupal and other software built on the LAMP software stack. MySQL is also used in many high-profile, large-scale World Wide Web products, including Wikipedia, Google[5] and Facebook.[6] MySQL is a popular choice of database for use in web applications, and is a central component of the widely used LAMP web application software stack (LAMP is an acronym for "Linux, Apache, MySQL, PHP"). Its popularity is closely tied to the popularity of PHP. MySQL is used in some of the most frequently visited websites on the Internet, including Flickr, Facebook, Google (though not for searches), Nokia.com and YouTube. MySQL works on many different system platforms, including AIX, BSD, FreeBSD, HP-UX, i5/OS, Linux, Mac OS X, NetBSD, Novell NetWare, OpenBSD, OpenSolaris, eComStation, OS/2 Warp, QNX, IRIX, Solaris, Symbian, SunOS, SCO OpenServer, SCO UnixWare, Sanos, Tru64 and Microsoft Windows.

Friday, September 6, 2019

Beautiful! Great God! Essay Example for Free

Beautiful! Great God! Essay What do chapters 2, 3, 4, 5, 9 and 10 reveal about Mary Shelley's attitude to knowledge? Mary Shelley is a gothic writer who (through her novel Frankenstein) was able to create a hybrid form of gothic literature, a gothic/horror genre which allows Shelley to convey a more realistic terror, one that resides within the psyche instead of taking a form outside it, such as ghosts. Her knowledge of different subjects allows her to create a realistic world in the novel, possibly even criticising her own husband, Percy Shelley, who searched for knowledge and in doing so became egotistical and self-obsessed like a true romantic, just like Frankenstein and other romantic characters like him. Shelley was always surrounded by intelligent people, mainly her father and his inner circle, which also included her husband. These people encouraged Shelley to educate herself and develop her own opinions. Shelley found the gothic genre a perfect place in which she could air her thoughts, such as a critical view of certain powers in her society, and imply things about the industrial revolution through subtle remarks in the novel. The novel itself was the product of Shelley taking up a challenge to write a ghost story, which was her chance to give a dire warning (through the didactic tone throughout the novel) to a society that embraced experimenting and questing for the unknown, so much a part of her culture, while at the same time playing on the fears of the middle classes: graves were being dug up and bodies used, which made this tome quite fearful and intriguing to its readers, as Shelley raises many ethical issues on the subject of science. Throughout the novel Shelley has much to say on the concept of knowledge, as when she warns us, her readers, of the danger of knowledge when it is used to obtain power.
"What had been the study and desire of the wisest men since the creation of the world was now within my grasp." The use of the words "grasp" and "creation" suggests that Frankenstein wants to become omnipotent and play God. Remarks such as this show Shelley's critical views on her society, on issues such as science: how a man can become obsessed with something dangerous to himself or to others, probably both. This could also be seen as another reference to her own husband's obsession with knowledge, and the warning may actually be directed at him. Frankenstein's experience at university is very important in the text, as that is where he forms his strong friendship with Henry Clerval, is guided and ridiculed by his professors, and actually creates the creature. Frankenstein's first experiences of university were feelings of isolation and melancholy, which worsened through the ridicule of his work by his first professor, Krempe, who tells him not to waste his time on the "trash" that he has read up until now. This may be Shelley suggesting that ignorance isn't a bad thing, because it is only once Frankenstein starts delving into new areas that he is able to create the creature which causes so many problems. Shelley shows us this ("where ignorance is bliss, 'tis folly to be wise") in the paragraph in which Frankenstein realises that the creature he has created is not beautiful as he intended but, in his eyes, a monster: "I had selected his features as beautiful. Beautiful! Great God!" This quote is Frankenstein in hindsight looking at his creature and realising his folly. The use of the words "Great God!" shows that his wisdom was of no use. Even though his professors didn't really guide Frankenstein in the right way, Frankenstein still follows their wisdom to folly in the creation of the monster.

Thursday, September 5, 2019

Decision Tree for Prognostic Classification

Decision Tree for Prognostic Classification of Multivariate Survival Data and Competing Risks

1. Introduction

The decision tree (DT) is one way to represent the rules underlying data. It is the most popular tool for exploring complex data structures. Besides that, it has become one of the most flexible, intuitive and powerful data analytic tools for determining distinct prognostic subgroups with similar outcomes within each subgroup but different outcomes between the subgroups (i.e., prognostic grouping of patients). It is a hierarchical, sequential classification structure that recursively partitions the set of observations. Prognostic groups are important in assessing disease heterogeneity and for the design and stratification of future clinical trials. Because patterns of medical treatment are changing so rapidly, it is important that the results of the present analysis be applicable to contemporary patients. Due to their mathematical simplicity, linear regression for continuous data, logistic regression for binary data, proportional hazards regression for censored survival data, marginal and frailty regression for multivariate survival data, and proportional subdistribution hazards regression for competing risks data are among the most commonly used statistical methods. These parametric and semiparametric regression methods, however, may not lead to faithful data descriptions when the underlying assumptions are not satisfied. Sometimes, model interpretation can be problematic in the presence of high-order interactions among predictors. DT has evolved to relax or remove these restrictive assumptions. In many cases, DT is used to explore data structures and to derive parsimonious models. DT is selected to analyze the data rather than traditional regression analysis for several reasons. Discovery of interactions is difficult using traditional regression, because the interactions must be specified a priori.
In contrast, DT automatically detects important interactions. Furthermore, unlike traditional regression analysis, DT is useful in uncovering variables that may be largely operative within a specific patient subgroup but may have minimal effect or none in other patient subgroups. Also, DT provides a superior means for prognostic classification. Rather than fitting a model to the data, DT sequentially divides the patient group into two subgroups based on prognostic factor values (e.g., tumor size above or below a cutoff). The landmark work on DT in the statistical community is the Classification and Regression Trees (CART) methodology of Breiman et al. (1984). A different approach was C4.5, proposed by Quinlan (1993). The original DT method was used in classification and regression for categorical and continuous response variables, respectively. In a clinical setting, however, the outcome of primary interest is often duration of survival, time to event, or some other incomplete (that is, censored) outcome. Therefore, several authors have developed extensions of the original DT in the setting of censored survival data (Banerjee and Noone, 2008). In science and technology, interest often lies in studying processes which generate events repeatedly over time. Such processes are referred to as recurrent event processes, and the data they provide are called recurrent event data, a type of multivariate survival data. Such data arise frequently in medical studies, where information is often available on many individuals, each of whom may experience transient clinical events repeatedly over a period of observation. Examples include the occurrence of asthma attacks in respirology trials, epileptic seizures in neurology studies, and fractures in osteoporosis studies. In business, examples include the filing of warranty claims on automobiles, or insurance claims for policy holders.
Since multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events, further extensions of DT have been developed for such data. In some studies, patients may be simultaneously exposed to several events, each competing for their mortality or morbidity. For example, suppose that a group of patients diagnosed with heart disease is followed in order to observe a myocardial infarction (MI). If by the end of the study each patient was either observed to have had an MI or was alive and well, then the usual survival techniques can be applied. In real life, however, some patients may die from other causes before experiencing an MI. This is a competing risks situation, because death from other causes prohibits the occurrence of MI. MI is considered the event of interest, while death from other causes is considered a competing risk. The group of patients dead of other causes cannot be considered censored, since their observations are not incomplete. The extension of DT can also be employed for competing risks survival time data. These extensions make it possible to apply the technique to clinical trial data to aid in the development of prognostic classifications for chronic diseases. This chapter will cover DT for multivariate and competing risks survival time data, as well as their application in the development of medical prognoses. The two kinds of multivariate survival time regression model, i.e. the marginal and frailty regression models, each have their own DT extensions, whereas the extension of DT for competing risks has two types of tree: first, the "single event" DT, developed with a splitting function that uses one event only; and second, the "composite events" tree, which uses all the events jointly.

2. Decision Tree

A DT is a tree-like structure used for classification, decision theory, clustering, and prediction functions.
It depicts rules for dividing data into groups based on the regularities in the data. A DT can be used for categorical and continuous response variables. When the response variable is continuous, the DT is often referred to as a regression tree. If the response variable is categorical, it is called a classification tree. However, the same concepts apply to both types of trees. DTs are widely used in computer science for data structures, in medical sciences for diagnosis, in botany for classification, in psychology for decision theory, and in economic analysis for evaluating investment alternatives. DTs learn from data and generate models containing explicit rule-like relationships among the variables. DT algorithms begin with the entire set of data, split the data into two or more subsets by testing the value of a predictor variable, and then repeatedly split each subset into finer subsets until the split size reaches an appropriate level. The entire modeling process can be illustrated in a tree-like structure. A DT model consists of two parts: creating the tree and applying the tree to the data. To achieve this, DTs use several different algorithms. The most popular algorithm in the statistical community is Classification and Regression Trees (CART) (Breiman et al., 1984). This algorithm helped DTs gain credibility and acceptance in the statistics community. It creates binary splits on nominal or interval predictor variables for a nominal, ordinal, or interval response. The algorithms most widely used by computer scientists are ID3, C4.5, and C5.0 (Quinlan, 1993). The first versions of C4.5 and C5.0 were limited to categorical predictors; however, the most recent versions are similar to CART. Other algorithms include Chi-Square Automatic Interaction Detection (CHAID) for categorical responses (Kass, 1980), CLS, AID, TREEDISC, Angoss KnowledgeSEEKER, CRUISE, GUIDE and QUEST (Loh, 2008). These algorithms use different approaches for splitting variables.
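The grow-by-splitting loop just described (start with all the data, choose a split, recurse on each subset) can be sketched as a toy recursive partitioner. This is an illustrative sketch only, not an implementation of CART or C4.5; the function names (`gini`, `best_split`, `grow`, `predict`), the Gini scoring, and the stopping rule are my own choices:

```python
# Toy recursive partitioning: split on the (feature, threshold) pair that
# minimizes weighted Gini impurity, then recurse on each half.
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(rows, labels):
    best = None  # (score, feature index, threshold)
    for j in range(len(rows[0])):
        for t in sorted({r[j] for r in rows}):
            left = [y for r, y in zip(rows, labels) if r[j] <= t]
            right = [y for r, y in zip(rows, labels) if r[j] > t]
            if not left or not right:
                continue  # degenerate split: everything on one side
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
            if best is None or score < best[0]:
                best = (score, j, t)
    return best

def grow(rows, labels, min_size=2):
    # Stop when the node is pure, too small, or unsplittable: return majority class.
    split = best_split(rows, labels) if len(set(labels)) > 1 and len(rows) >= min_size else None
    if split is None:
        return Counter(labels).most_common(1)[0][0]
    _, j, t = split
    idx_l = [i for i, r in enumerate(rows) if r[j] <= t]
    idx_r = [i for i, r in enumerate(rows) if r[j] > t]
    return (j, t,
            grow([rows[i] for i in idx_l], [labels[i] for i in idx_l], min_size),
            grow([rows[i] for i in idx_r], [labels[i] for i in idx_r], min_size))

def predict(node, row):
    while isinstance(node, tuple):  # internal nodes are (feature, threshold, L, R)
        j, t, left, right = node
        node = left if row[j] <= t else right
    return node  # a leaf holds the majority class label

tree = grow([[1], [2], [8], [9]], ["no", "no", "yes", "yes"])
print(tree)  # → (0, 2, 'no', 'yes')
```

On this tiny separable dataset the tree is a single split at 2, with class labels at the leaves; real algorithms differ mainly in the impurity measure and stopping/pruning rules.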
CART, CRUISE, GUIDE and QUEST use the statistical approach, while CLS, ID3, and C4.5 use an approach in which the number of branches off an internal node is equal to the number of possible categories. Another common approach, used by AID, CHAID, and TREEDISC, is one in which the number of nodes off an internal node varies from two to the maximum number of possible categories. Angoss KnowledgeSEEKER uses a combination of these approaches. Each algorithm employs different mathematical processes to determine how to group and rank variables.

Let us illustrate the DT method with a simplified example of credit evaluation. Suppose a credit card issuer wants to develop a model that can be used for evaluating potential candidates based on its historical customer data. The company's main concern is default of payment by a cardholder. Therefore, the model should be able to help the company classify a candidate as a possible defaulter or not. The database may contain millions of records and hundreds of fields. A fragment of such a database is shown in Table 1. The input variables include income, age, education, occupation, and many others, determined by some quantitative or qualitative methods. The model building process is illustrated in the tree structure in Figure 1. The DT algorithm first selects a variable, income, to split the dataset into two subsets. This variable, and also the splitting value of $31,000, is selected by a splitting criterion of the algorithm. There exist many splitting criteria (Mingers, 1989). The basic principle of these criteria is that they all attempt to divide the data into clusters such that variations within each cluster are minimized and variations between the clusters are maximized.

Name     Age  Income  Education    Occupation   Default
Andrew   42   45600   College      Manager      No
Allison  26   29000   High School  Self Owned   Yes
Sabrina  58   36800   High School  Clerk        No
Andy     35   37300   College      Engineer     No
…

Table 1. Partial records and fields of a database table for credit evaluation

The follow-up splits are similar to the first one. The process continues until an appropriate tree size is reached. Figure 1 shows a segment of the DT. Based on this tree model, a candidate with an income of at least $31,000 and at least a college degree is unlikely to default on the payment; but a self-employed candidate whose income is less than $31,000 and whose age is less than 28 is more likely to default.

We begin with a discussion of the general structure of a popular DT algorithm in the statistical community, i.e. the CART model. A CART model describes the conditional distribution of y given X, where y is the response variable and X is a set of predictor variables (X = (X1, X2, …, Xp)). This model has two main components: a tree T with b terminal nodes, and a parameter Q = (q1, q2, …, qb) ⊂ Rk which associates the parameter value qm with the m-th terminal node. Thus a tree model is fully specified by the pair (T, Q). If X lies in the region corresponding to the m-th terminal node, then y|X has the distribution f(y|qm), where we use f to represent a conditional distribution indexed by qm. The model is called a regression tree or a classification tree according to whether the response y is quantitative or qualitative, respectively.

2.1 Splitting a tree

The DT T subdivides the predictor variable space as follows. Each internal node has an associated splitting rule which uses a predictor to assign observations to either its left or right child node. An internal node is thus partitioned into two subsequent nodes using the splitting rule. For a quantitative predictor, the splitting rule is based on a split value c, and assigns observations for which {xi ≤ c} to the left child node and the remaining observations to the right child node. For a regression tree, the conventional algorithm models the response in each region Rm as a constant qm.
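The split-value search for a single quantitative predictor, with the constant qm fitted as the region mean, can be sketched in a few lines. This is a toy illustration, not the CART implementation; the helper names `rss` and `best_split_value` are mine:

```python
def rss(ys):
    """Residual sum of squares around the region mean (the fitted constant q)."""
    if not ys:
        return 0.0
    q = sum(ys) / len(ys)  # best constant fit for squared error: the average
    return sum((y - q) ** 2 for y in ys)

def best_split_value(xs, ys):
    """Return the split value c minimizing the total RSS of the two children."""
    best_c, best_score = None, float("inf")
    for c in sorted(set(xs))[:-1]:  # splitting above the max is pointless
        left = [y for x, y in zip(xs, ys) if x <= c]
        right = [y for x, y in zip(xs, ys) if x > c]
        score = rss(left) + rss(right)
        if score < best_score:
            best_c, best_score = c, score
    return best_c

xs = [1, 2, 3, 10, 11, 12]
ys = [1.0, 1.1, 0.9, 5.0, 5.1, 4.9]
# The split should land at the obvious gap between x = 3 and x = 10:
print(best_split_value(xs, ys))  # → 3
```

With several predictors, the same scan is simply repeated over each predictor and the best (predictor, c) pair is kept, as in the recursive sketch given earlier in this chapter.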
Thus the overall tree model can be expressed as (Hastie et al., 2001):

f(x) = Σ_{m=1}^{b} qm I(x ∈ Rm),   (1)

where Rm, m = 1, 2, …, b constitute a partition of the predictor space, representing the b terminal nodes. If we adopt minimization of the sum of squares as our criterion to characterize the best split, it is easy to see that the best estimate q̂m is just the average of the yi in region Rm:

q̂m = (1/Nm) Σ_{xi ∈ Rm} yi,   (2)

where Nm is the number of observations falling in node m. The within-node mean residual sum of squares,

Qm(T) = (1/Nm) Σ_{xi ∈ Rm} (yi − q̂m)²,   (3)

will serve as an impurity measure for regression trees. If the response is a factor taking outcomes 1, 2, …, K, the impurity measure Qm(T) defined in (3) is not suitable. Instead, we represent a region Rm with Nm observations by

p̂mk = (1/Nm) Σ_{xi ∈ Rm} I(yi = k),   (4)

the proportion of class k (k ∈ {1, 2, …, K}) observations in node m. We classify the observations in node m to the class k(m) = arg max_k p̂mk, the majority class in node m. Different measures Qm(T) of node impurity include the following (Hastie et al., 2001):

Misclassification error: 1 − p̂m,k(m)
Gini index: Σ_{k=1}^{K} p̂mk (1 − p̂mk)
Cross-entropy or deviance: −Σ_{k=1}^{K} p̂mk log p̂mk   (5)

For binary outcomes, if p is the proportion of the second class, these three measures are 1 − max(p, 1 − p), 2p(1 − p) and −p log p − (1 − p) log(1 − p), respectively. All three definitions of impurity are concave, attaining their minima at p = 0 and p = 1 and their maximum at p = 0.5. Entropy and the Gini index are the most common, and generally give very similar results except when there are two response categories.

2.2 Pruning a tree

To be consistent with conventional notation, let us define the impurity of a node h as I(h) ((3) for a regression tree, and any one of (5) for a classification tree). We then choose the split with maximal impurity reduction

ΔI = p(h)I(h) − p(hL)I(hL) − p(hR)I(hR),   (6)

where hL and hR are the left and right child nodes of h and p(h) is the proportion of the sample falling in node h. How large should we grow the tree then? Clearly a very large tree might overfit the data, while a small tree may not be able to capture the important structure.
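Before turning to tree size, the two-class impurity formulas above can be checked numerically. The short sketch below (function names are our own; the formulas are the standard ones stated in the text) confirms that all three measures vanish at the pure nodes and peak at p = 0.5:

```python
import math

def misclassification(p):
    return 1 - max(p, 1 - p)

def gini(p):
    return 2 * p * (1 - p)

def entropy(p):
    if p in (0.0, 1.0):  # define 0*log(0) = 0 by continuity
        return 0.0
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

for f in (misclassification, gini, entropy):
    assert f(0.0) == 0.0 and f(1.0) == 0.0        # minima at the pure nodes
    assert f(0.5) >= f(0.3) and f(0.5) >= f(0.7)  # maximum at p = 0.5

print(misclassification(0.5), gini(0.5), entropy(0.5))
# -> 0.5 0.5 0.6931471805599453 (entropy peaks at log 2)
```

The natural logarithm is used here; base 2 would only rescale the entropy curve.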
Tree size is a tuning parameter governing the model's complexity, and the optimal tree size should be adaptively chosen from the data. One approach would be to continue the splitting procedure only as long as the decrease in impurity due to a split exceeds some threshold. This strategy is too short-sighted, however, since a seemingly worthless split might lead to a very good split below it. The preferred strategy is to grow a large tree T0, stopping the splitting process when some minimum number of observations in a terminal node (say 10) is reached. Then this large tree is pruned using a pruning algorithm, such as the cost-complexity or the split-complexity pruning algorithm.

To prune the large tree T0 by the cost-complexity algorithm, we define a subtree T ⊆ T0 to be any tree that can be obtained by pruning T0, that is, by collapsing any number of its internal nodes, and define T̃ to be the set of terminal nodes of T. As before, we index terminal nodes by m, with node m representing region Rm. Let |T̃| denote the number of terminal nodes in T (|T̃| = b); we use |T̃| instead of b following the conventional notation. We define the cost of a tree as

Regression tree: R(T) = Σ_{m=1}^{|T̃|} Nm Qm(T),
Classification tree: R(T) = Σ_{h ∈ T̃} p(h) r(h),   (7)

where r(h) measures the impurity of node h in a classification tree (it can be any one of (5)). We define the cost-complexity criterion (Breiman et al., 1984)

Ra(T) = R(T) + a|T̃|,   (8)

where a (> 0) is the complexity parameter. The idea is, for each a, to find the subtree Ta ⊆ T0 that minimizes Ra(T). The tuning parameter a > 0 governs the tradeoff between tree size and its goodness of fit to the data (Hastie et al., 2001). Large values of a result in smaller trees Ta, and conversely for smaller values of a. As the notation suggests, with a = 0 the solution is the full tree T0. To find Ta we use weakest-link pruning: we successively collapse the internal node that produces the smallest per-node increase in R(T), and continue until we produce the single-node (root) tree.
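Weakest-link pruning can be made concrete with a toy tree. The sketch below (the node risks are hypothetical, not from any real dataset) computes, for each internal node h, the per-node increase in risk g(h) = (R(h) − R(Th)) / (|T̃h| − 1) incurred by collapsing its branch, and identifies the node with the smallest value as the first to be collapsed:

```python
class Node:
    def __init__(self, risk, left=None, right=None):
        self.risk = risk  # resubstitution risk if this node were a leaf
        self.left, self.right = left, right

    def is_leaf(self):
        return self.left is None and self.right is None

def leaves(node):
    if node.is_leaf():
        return [node]
    return leaves(node.left) + leaves(node.right)

def branch_risk(node):
    """R(T_h): total risk of the leaves under `node`."""
    return sum(leaf.risk for leaf in leaves(node))

def g(node):
    """Per-terminal-node increase in risk from collapsing the branch at `node`."""
    return (node.risk - branch_risk(node)) / (len(leaves(node)) - 1)

def walk(node):
    yield node
    if not node.is_leaf():
        yield from walk(node.left)
        yield from walk(node.right)

def weakest_link(root):
    """Internal node with the smallest g(h); collapsing it is the first pruning step."""
    internal = [n for n in walk(root) if not n.is_leaf()]
    return min(internal, key=g)

# Hand-built tree: the root would cost 14 as a leaf; its right child B costs 6
# as a leaf but only 3 when split into leaves with risks 2 and 1.
B = Node(6, Node(2), Node(1))
root = Node(14, Node(3), B)
print(g(root), g(B))  # -> 4.0 3.0, so B is the weakest link
```

Collapsing B and recomputing g on the smaller tree yields the next member of the nested subtree sequence, exactly as the weakest-link procedure in the text describes.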
This gives a (finite) sequence of subtrees, and one can show this sequence must contain Ta. See Breiman et al. (1984) and Ripley (1996) for details. Estimation of a, say â, is achieved by five- or ten-fold cross-validation; our final tree is then denoted T_â.

It follows that, in CART and related algorithms, classification and regression trees are produced from data in two stages. In the first stage, a large initial tree is produced by splitting one node at a time in an iterative, greedy fashion. In the second stage, a small subtree of the initial tree is selected, using the same data set. Whereas the splitting procedure proceeds in a top-down fashion, the second stage, known as pruning, proceeds from the bottom up by successively removing nodes from the initial tree.

Theorem 1 (Breiman et al., 1984, Section 3.3). For any value of the complexity parameter a, there is a unique smallest subtree of T0 that minimizes the cost-complexity.

Theorem 2 (Zhang & Singer, 1999, Section 4.2). If a2 > a1, the optimal subtree corresponding to a2 is a subtree of the optimal subtree corresponding to a1.

More generally, suppose we end up with m thresholds 0 < a1 < a2 < … < am. Then the corresponding optimal subtrees are nested:

T0 ⊇ Ta1 ⊇ Ta2 ⊇ … ⊇ Tam,   (9)

where T ⊇ T′ means that T′ is a subtree of T. These are called nested optimal subtrees.

3. Decision Tree for Censored Survival Data

Survival analysis is the phrase used to describe the analysis of data that correspond to the time from a well-defined time origin until the occurrence of some particular event or end-point. It is important to state what the event is and when the period of observation starts and finishes. In medical research, the time origin will often correspond to the recruitment of an individual into an experimental study, and the end-point is the death of the patient or the occurrence of some adverse event. Survival data are rarely normally distributed; they are skewed and typically comprise many early events and relatively few late ones. It is these features of the data that necessitate the special methods of survival analysis.
The specific difficulties relating to survival analysis arise largely from the fact that only some individuals have experienced the event and, consequently, survival times will be unknown for a subset of the study group. This phenomenon is called censoring, and it may arise in the following ways: (a) a patient has not (yet) experienced the relevant outcome, such as relapse or death, by the time the study has to end; (b) a patient is lost to follow-up during the study period; (c) a patient experiences a different event that makes further follow-up impossible. Generally, censoring times may vary from individual to individual. Such censored survival times underestimate the true (but unknown) time to event. Visualising the survival process of an individual as a time-line, the event (assuming it is to occur) lies beyond the end of the follow-up period. This situation is called right censoring, and most survival data include right-censored observations.

In many biomedical and reliability studies, interest focuses on relating the time to event to a set of covariates. The Cox proportional hazards model (Cox, 1972) has been established as the major framework for the analysis of such survival data over the past three decades. Often in practice, however, one primary goal of survival analysis is to extract meaningful subgroups of patients determined by prognostic factors, such as patient characteristics, that are related to the level of disease. Although the proportional hazards model and its extensions are powerful for studying the association between covariates and survival times, they are usually problematic for prognostic classification. One approach to classification is to compute a risk score based on the estimated coefficients from regression methods (Machin et al., 2006). This approach, however, may be problematic for several reasons. First, the definition of risk groups is arbitrary. Secondly, the risk score depends on the correct specification of the model.
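To make the right-censoring mechanism above concrete, the sketch below implements the classical Kaplan-Meier product-limit estimator of the survival function in plain Python (the estimator is standard; the toy data are invented for illustration). Censored subjects leave the risk set without contributing an event:

```python
def kaplan_meier(data):
    """Product-limit estimate of S(t) from (time, event) pairs,
    where event = 1 is an observed failure and 0 is right censoring."""
    s = 1.0
    curve = []
    at_risk = len(data)
    for t in sorted({t for t, e in data}):
        deaths = sum(1 for time, e in data if time == t and e == 1)
        removals = sum(1 for time, e in data if time == t)
        if deaths > 0:
            s *= (at_risk - deaths) / at_risk  # S(t) drops only at event times
            curve.append((t, s))
        at_risk -= removals  # censored subjects silently leave the risk set
    return curve

# Four subjects: failures at t = 2, 3, 5; one subject censored at t = 4.
data = [(2, 1), (3, 1), (4, 0), (5, 1)]
for t, s in kaplan_meier(data):
    print(t, round(s, 4))
```

Note how the censored subject at t = 4 produces no step in the curve but reduces the denominator for the failure at t = 5, which is precisely why ignoring censoring would bias the estimate downward in time.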
It is difficult to check whether the model is correct when many covariates are involved. Thirdly, when there are many interaction terms and the model becomes complicated, the results become difficult to interpret for the purpose of prognostic classification. Finally, a more serious problem is that an invalid prognostic group may be produced if no patient falls in a given covariate profile. In contrast, DT methods do not suffer from these problems. Owing to the development of fast computers, computer-intensive methods such as DT methods have become popular. Since they investigate the significance of all potential risk factors automatically and provide interpretable models, they offer distinct advantages to analysts. Recently a large number of DT methods have been developed for the analysis of survival data, where the basic concepts for growing and pruning trees remain unchanged, but the splitting criterion has been modified to accommodate censored survival data. The application of DT methods to survival data is described by a number of authors (Gordon & Olshen, 1985; Ciampi et al., 1986; Segal, 1988; Davis & Anderson, 1989; Therneau et al., 1990; LeBlanc & Crowley, 1992; LeBlanc & Crowley, 1993; Ahn & Loh, 1994; Bacchetti & Segal, 1995; Huang et al., 1998; Keleş & Segal, 2002; Jin et al., 2004; Cappelli & Zhang, 2007; Cho & Hong, 2008), including the text by Zhang & Singer (1999).

4. Decision Tree for Multivariate Censored Survival Data

Multivariate survival data frequently arise when we face the complexity of studies involving multiple treatment centres, family members, or measurements made repeatedly on the same individual. For example, in multi-centre clinical trials, the outcomes for groups of patients at several centres are examined. In some instances, patients in a centre might exhibit similar responses due to uniformity of surroundings and procedures within the centre. This would result in correlated outcomes at the level of the treatment centre.
For studies of family members or litters, correlation in outcome is likely for genetic reasons; in this case, the outcomes are correlated at the family or litter level. Finally, when one person or animal is measured repeatedly over time, correlation will almost certainly exist among those responses. Within the context of correlated data, the observations which are correlated for a group of individuals (within a treatment centre or a family) or for one individual (because of repeated sampling) are referred to as a cluster, so that from this point on, the responses within a cluster will be assumed to be correlated.

Analysis of multivariate survival data is complex due to the presence of dependence among survival times and unknown marginal distributions. Multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events. A successful treatment of correlated failure times was made by Clayton and Cuzick (1985), who modelled the dependence structure with a frailty term. Another approach is based on a proportional hazards formulation of the marginal hazard function, which has been studied by Wei et al. (1989) and Liang et al. (1993). Notably, Prentice et al. (1981) and Andersen & Gill (1982) also suggested two alternative approaches to analyzing multiple event times.

Extension of tree techniques to multivariate censored data is motivated by the classification issues associated with multivariate survival data. For example, clinical investigators design studies to form prognostic rules. Credit risk analysts collect account information to build up credit scoring criteria. Frequently, in such studies the outcomes of ultimate interest are correlated times to event, such as relapses, late payments, or bankruptcies. Since DT methods recursively partition the predictor space, they are an alternative to conventional regression tools.
This section is concerned with the generalization of DT models to multivariate survival data. In extending DT methods to multivariate survival data, additional difficulties need to be circumvented.

4.1 Decision tree for multivariate survival data based on marginal model

DT methods for multivariate survival data are few. Almost all multivariate DT methods have been based on between-node heterogeneity, with the exception of Molinaro et al. (2004), who proposed a general within-node homogeneity approach for both univariate and multivariate data. The multivariate methods proposed by Su & Fan (2001, 2004) and Gao et al. (2004, 2006) concentrate on between-node heterogeneity and use the results of regression models. Specifically, for recurrent event data and clustered event data, Su & Fan (2004) used likelihood-ratio tests while Gao et al. (2004) used robust Wald tests from a gamma frailty model to maximize the between-node heterogeneity. Su & Fan (2001) and Fan et al. (2006) used a robust log-rank statistic, while Gao et al. (2006) used a robust Wald test from the marginal failure-time model of Wei et al. (1989).

The generalization of DT to multivariate survival data is developed using the goodness-of-split approach. A DT by goodness of split is grown by maximizing a measure of between-node difference; therefore, only internal nodes have associated two-sample statistics. The tree structure differs from CART because, for trees grown by minimizing within-node error, each node, whether terminal or internal, has an associated impurity measure. This is why the CART pruning procedure is not directly applicable to such trees. However, with the split-complexity pruning algorithm of LeBlanc & Crowley (1993), trees grown by goodness of split have become well-developed tools. This modified tree technique not only provides a convenient way of handling survival data, but also enlarges the applied scope of DT methods in a more general sense.
Especially in situations where defining prediction error is relatively difficult, growing trees by a two-sample statistic, together with split-complexity pruning, offers a feasible way of performing tree analysis.

The DT procedure consists of three parts: a method to partition the data recursively into a large tree, a method to prune the large tree into a subtree sequence, and a method to determine the optimal tree size. In the multivariate survival trees, the between-node difference is measured by a robust Wald statistic, derived from the marginal approach to multivariate survival data developed by Wei et al. (1989). We use the split-complexity pruning of LeBlanc & Crowley (1993) and a test sample to determine the right tree size.

4.1.1 The splitting statistic

We consider n independent subjects, each of which may have up to K potential types of failure. If the subjects have unequal numbers of failures, then K is the maximum. We let Tik = min(Yik, Cik), where Yik is the time of failure of the ith subject for the kth type of failure and Cik is the potential censoring time of the ith subject for the kth type of failure, with i = 1, …, n and k = 1, …, K. Then dik = I(Yik ≤ Cik) is the indicator of failure, and the vector of covariates is denoted Zik = (Z1ik, …, Zpik)T. To partition the data, we consider the hazard model for the ith unit for the kth type of failure, using the distinguishable baseline hazards described by Wei et al. (1989), namely

λk(t; Zik) = λ0k(t) exp{b I(Zik ≤ c)},   (10)

where the indicator function I(Zik ≤ c) codes membership of the candidate left child node. Parameter b is estimated by maximizing the partial likelihood. If the observations within the same unit were independent, the partial likelihood function for b under the distinguishable baseline model (10) would be

Lk(b) = Πi [ exp{b I(Zik ≤ c)} / Σ_{j ∈ Rk(Tik)} exp{b I(Zjk ≤ c)} ]^{dik},   (11)

where Rk(t) is the set of subjects at risk of the kth type of failure at time t. Since the observations within the same unit are not independent for multivariate failure times, we refer to the above function as the pseudo-partial likelihood.
The estimator b̂ can be obtained by maximizing the pseudo-partial likelihood, i.e. by solving ∂ log Lk(b)/∂b = 0. Wei et al. (1989) showed that b̂ is asymptotically normally distributed. However, the usual estimator A−1(b) of the variance of b̂, where

A(b) = −∂² log Lk(b)/∂b²,   (12)

is not valid; we refer to A−1(b) as the naïve estimator. Wei et al. (1989) showed that a correct estimated (robust) variance of b̂ is

D(b) = A−1(b) B(b) A−1(b),   (13)

where B(b) is a weight term; D(b) is often referred to as the robust or sandwich variance estimator. Hence, the robust Wald statistic corresponding to the null hypothesis H0 : b = 0 is

W = b̂² / D(b̂).   (14)

4.1.2 Tree growing

To grow a tree, the robust Wald statistic is evaluated for every possible binary split of the predictor space Z. The split s could take several forms: splits on a single covariate, splits on linear combinations of predictors, and Boolean combinations of splits. The simplest form of split involves only one covariate, where the form of the split depends on whether the covariate is ordered or nominal. The "best split" is defined to be the one corresponding to the maximum robust Wald statistic. Subsequently the data are divided into two groups according to the best split. This splitting scheme is applied recursively to the learning sample until the predictor space is partitioned into many regions. A node is not partitioned further when any of the following occurs: (a) the node contains fewer than, say, 10 or 20 subjects, if the overall sample size is large enough to permit this (we suggest using a larger minimum node size than the CART default of 5); (b) all the observed times in the subset are censored, which makes the robust Wald statistic unavailable for any split; (c) all the subjects have identical covariate vectors, or the node contains only complete observations with identical survival times. In these situations, the node is considered pure. The whole procedure results in a large tree, which can be used for the purpose of exploring the data structure.
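Computing the robust Wald statistic requires fitting the marginal model at every candidate split. For intuition about how a two-sample statistic scores a split, the ordinary (non-robust) log-rank statistic, which Su & Fan (2001) used in robust form for exactly this purpose, is simple enough to compute directly. The sketch below uses invented toy data and is only an illustration of the goodness-of-split idea, not the Wei et al. (1989) procedure itself:

```python
def logrank(group_a, group_b):
    """Two-sample log-rank chi-square from (time, event) pairs;
    event = 1 is an observed failure, 0 is right censoring."""
    pooled = [(t, e, 0) for t, e in group_a] + [(t, e, 1) for t, e in group_b]
    event_times = sorted({t for t, e, _ in pooled if e == 1})
    observed_a = expected_a = variance = 0.0
    for t in event_times:
        n_a = sum(1 for time, _, g in pooled if time >= t and g == 0)
        n_b = sum(1 for time, _, g in pooled if time >= t and g == 1)
        n = n_a + n_b
        d = sum(1 for time, e, _ in pooled if time == t and e == 1)
        d_a = sum(1 for time, e, g in pooled if time == t and e == 1 and g == 0)
        observed_a += d_a
        expected_a += d * n_a / n  # hypergeometric mean under H0
        if n > 1:
            variance += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return (observed_a - expected_a) ** 2 / variance

# Candidate split sending early failures left and late failures right.
left = [(1, 1), (2, 1)]
right = [(3, 1), (4, 1)]
print(logrank(left, right))  # approximately 2.882 (= 49/17)
```

A tree-growing loop would evaluate such a statistic for every candidate split and keep the maximizer, exactly as described above for the robust Wald statistic.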
4.1.3 Tree pruning

Let T denote either a particular tree or the set of all its nodes. Let S and T̃ denote the set of internal nodes and the set of terminal nodes of T, respectively; therefore T = S ∪ T̃. Also let |·| denote the number of nodes in a set. Let G(h) represent the maximum robust Wald statistic at a particular (internal) node h. In order to measure the performance of a tree, a split-complexity measure Ga(T) is introduced as in LeBlanc and Crowley (1993):

Ga(T) = G(T) − a|S| = Σ_{h ∈ S} G(h) − a|S|,   (15)

where the number of internal nodes |S| measures complexity, G(T) measures the goodness of split in T, and the complexity parameter a acts as a penalty for each additional split.

Start with the large tree T0 obtained from the splitting procedure. For any internal node h of T0, i.e. h ∈ S0, a function g(h) is defined as

g(h) = G(Th) / |Sh|,   (16)

where Th denotes the branch with h as its root and Sh is the set of all internal nodes of Th. Then the weakest link in T0 is the node h̃ such that g(h̃) = min_{h ∈ S0} g(h).

Decision Tree for Prognostic Classification of Multivariate Survival Data and Competing Risks

1. Introduction

Decision tree (DT) is one way to represent the rules underlying data. It is among the most popular tools for exploring complex data structures, and it has become one of the most flexible, intuitive and powerful data analytic tools for determining distinct prognostic subgroups, with similar outcomes within each subgroup but different outcomes between subgroups (i.e., prognostic grouping of patients). It is a hierarchical, sequential classification structure that recursively partitions the set of observations. Prognostic groups are important in assessing disease heterogeneity and for the design and stratification of future clinical trials. Because patterns of medical treatment are changing so rapidly, it is important that the results of the present analysis be applicable to contemporary patients.
Due to their mathematical simplicity, linear regression for continuous data, logistic regression for binary data, proportional hazards regression for censored survival data, marginal and frailty regression for multivariate survival data, and proportional subdistribution hazards regression for competing risks data are among the most commonly used statistical methods. These parametric and semiparametric regression methods, however, may not lead to faithful data descriptions when the underlying assumptions are not satisfied. Moreover, model interpretation can be problematic in the presence of high-order interactions among predictors. DT has evolved to relax or remove these restrictive assumptions. In many cases, DT is used to explore data structures and to derive parsimonious models.

DT is selected to analyze the data rather than traditional regression analysis for several reasons. Discovery of interactions is difficult using traditional regression, because the interactions must be specified a priori. In contrast, DT automatically detects important interactions. Furthermore, unlike traditional regression analysis, DT is useful in uncovering variables that may be largely operative within a specific patient subgroup but may have minimal effect, or none, in other patient subgroups. Also, DT provides a superior means of prognostic classification: rather than fitting a model to the data, DT sequentially divides the patient group into two subgroups based on prognostic factor values (e.g., tumor size).

The landmark work on DT in the statistical community is the Classification and Regression Trees (CART) methodology of Breiman et al. (1984). A different approach, C4.5, was proposed by Quinlan (1992). The original DT methods were used in classification and regression for categorical and continuous response variables, respectively. In a clinical setting, however, the outcome of primary interest is often duration of survival, time to event, or some other incomplete (that is, censored) outcome.
Therefore, several authors have developed extensions of the original DT to the setting of censored survival data (Banerjee & Noone, 2008). In science and technology, interest often lies in studying processes which generate events repeatedly over time. Such processes are referred to as recurrent event processes, and the data they provide are called recurrent event data, a type of multivariate survival data. Such data arise frequently in medical studies, where information is often available on many individuals, each of whom may experience transient clinical events repeatedly over a period of observation. Examples include the occurrence of asthma attacks in respirology trials, epileptic seizures in neurology studies, and fractures in osteoporosis studies. In business, examples include the filing of warranty claims on automobiles, or insurance claims for policy holders. Since multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events, further extensions of DT have been developed for such data.

In some studies, patients may be simultaneously exposed to several events, each competing for their mortality or morbidity. For example, suppose that a group of patients diagnosed with heart disease is followed in order to observe a myocardial infarction (MI). If by the end of the study each patient was either observed to have an MI or was alive and well, then the usual survival techniques can be applied. In real life, however, some patients may die from other causes before experiencing an MI. This is a competing risks situation, because death from other causes prohibits the occurrence of MI. MI is considered the event of interest, while death from other causes is considered a competing risk. The group of patients dead of other causes cannot be considered censored, since their observations are not incomplete.
The extension of DT can also be employed for competing risks survival time data. These extensions allow the technique to be applied to clinical trial data to aid in the development of prognostic classifications for chronic diseases. This chapter covers DT for multivariate and competing risks survival time data, as well as their application in the development of medical prognoses. The two kinds of multivariate survival time regression model, i.e. the marginal and the frailty regression model, each have their own DT extensions, while the extension of DT for competing risks has two types of tree: first, the "single event" DT, developed with a splitting function that uses one event only; second, the "composite events" tree, which uses all the events jointly.

2. Decision Tree

A DT is a tree-like structure used for classification, decision theory, clustering, and prediction functions. It depicts rules for dividing data into groups based on the regularities in the data. A DT can be used for categorical and continuous response variables. When the response variable is continuous, the DT is often referred to as a regression tree; if the response variable is categorical, it is called a classification tree. However, the same concepts apply to both types of trees. DTs are widely used in computer science for data structures, in medical sciences for diagnosis, in botany for classification, in psychology for decision theory, and in economic analysis for evaluating investment alternatives. DTs learn from data and generate models containing explicit rule-like relationships among the variables. DT algorithms begin with the entire set of data, split the data into two or more subsets by testing the value of a predictor variable, and then repeatedly split each subset into finer subsets until the split size reaches an appropriate level. The entire modeling process can be illustrated in a tree-like structure.
A DT model consists of two parts: creating the tree and applying the tree to the data. To achieve this, DTs use several different algorithms. The most popular algorithm in the statistical community is Classification and Regression Trees (CART) (Breiman et al., 1984). This algorithm helps DTs gain credibility and acceptance in the statistics community. It creates binary splits on nominal or interval predictor variables for a nominal, ordinal, or interval response. The most widely-used algorithms by computer scientists are ID3, C4.5, and C5.0 (Quinlan, 1993). The first version of C4.5 and C5.0 were limited to categorical predictors; however, the most recent versions are similar to CART. Other algorithms include Chi-Square Automatic Interaction Detection (CHAID) for categorical response (Kass, 1980), CLS, AID, TREEDISC, Angoss KnowledgeSEEKER, CRUISE, GUIDE and QUEST (Loh, 2008). These algorithms use different approaches for splitting variables. CART, CRUISE, GUIDE and QUEST use the sta tistical approach, while CLS, ID3, and C4.5 use an approach in which the number of branches off an internal node is equal to the number of possible categories. Another common approach, used by AID, CHAID, and TREEDISC, is the one in which the number of nodes on an internal node varies from two to the maximum number of possible categories. Angoss KnowledgeSEEKER uses a combination of these approaches. Each algorithm employs different mathematical processes to determine how to group and rank variables. Let us illustrate the DT method in a simplified example of credit evaluation. Suppose a credit card issuer wants to develop a model that can be used for evaluating potential candidates based on its historical customer data. The companys main concern is the default of payment by a cardholder. Therefore, the model should be able to help the company classify a candidate as a possible defaulter or not. The database may contain millions of records and hundreds of fields. 
A fragment of such a database is shown in Table 1. The input variables include income, age, education, occupation, and many others, determined by some quantitative or qualitative methods. The model building process is illustrated in the tree structure in 1. The DT algorithm first selects a variable, income, to split the dataset into two subsets. This variable, and also the splitting value of $31,000, is selected by a splitting criterion of the algorithm. There exist many splitting criteria (Mingers, 1989). The basic principle of these criteria is that they all attempt to divide the data into clusters such that variations within each cluster are minimized and variations between the clusters are maximized. The follow- Name Age Income Education Occupation Default Andrew 42 45600 College Manager No Allison 26 29000 High School Self Owned Yes Sabrina 58 36800 High School Clerk No Andy 35 37300 College Engineer No †¦ Table 1. Partial records and fields of a database table for credit evaluation up splits are similar to the first one. The process continues until an appropriate tree size is reached. 1 shows a segment of the DT. Based on this tree model, a candidate with income at least $31,000 and at least college degree is unlikely to default the payment; but a self-employed candidate whose income is less than $31,000 and age is less than 28 is more likely to default. We begin with a discussion of the general structure of a popular DT algorithm in statistical community, i.e. CART model. A CART model describes the conditional distribution of y given X, where y is the response variable and X is a set of predictor variables (X = (X1,X2,†¦,Xp)). This model has two main components: a tree T with b terminal nodes, and a parameter Q = (q1,q2,†¦, qb) ÃÅ' Rk which associates the parameter values qm, with the mth terminal node. Thus a tree model is fully specified by the pair (T, Q). 
If X lies in the region corresponding to the mth terminal node then y|X has the distribution f(y|qm), where we use f to represent a conditional distribution indexed by qm. The model is called a regression tree or a classification tree according to whether the response y is quantitative or qualitative, respectively. 2.1 Splitting a tree The DT T subdivides the predictor variable space as follows. Each internal node has an associated splitting rule which uses a predictor to assign observations to either its left or right child node. The internal nodes are thus partitioned into two subsequent nodes using the splitting rule. For quantitative predictors, the splitting rule is based on a split rule c, and assigns observations for which {xi For a regression tree, conventional algorithm models the response in each region Rm as a constant qm. Thus the overall tree model can be expressed as (Hastie et al., 2001): (1) where Rm, m = 1, 2,†¦,b consist of a partition of the predictors space, and therefore representing the space of b terminal nodes. If we adopt the method of minimizing the sum of squares as our criterion to characterize the best split, it is easy to see that the best , is just the average of yi in region Rm: (2) where Nm is the number of observations falling in node m. The residual sum of squares is (3) which will serve as an impurity measure for regression trees. If the response is a factor taking outcomes 1,2, K, the impurity measure Qm(T), defined in (3) is not suitable. Instead, we represent a region Rm with Nm observations with (4) which is the proportion of class k(k ÃŽ {1, 2,†¦,K}) observations in node m. We classify the observations in node m to a class , the majority class in node m. 
Different measures Qm(T) of node impurity include the following (Hastie et al., 2001): Misclassification error: Gini index: Cross-entropy or deviance: (5) For binary outcomes, if p is the proportion of the second class, these three measures are 1 max(p, 1 p), 2p(1 p) and -p log p (1 p) log(1 p), respectively. All three definitions of impurity are concave, having minimums at p = 0 and p = 1 and a maximum at p = 0.5. Entropy and the Gini index are the most common, and generally give very similar results except when there are two response categories. 2.2 Pruning a tree To be consistent with conventional notations, lets define the impurity of a node h as I(h) ((3) for a regression tree, and any one in (5) for a classification tree). We then choose the split with maximal impurity reduction (6) where hL and hR are the left and right children nodes of h and p(h) is proportion of sample fall in node h. How large should we grow the tree then? Clearly a very large tree might overfit the data, while a small tree may not be able to capture the important structure. Tree size is a tuning parameter governing the models complexity, and the optimal tree size should be adaptively chosen from the data. One approach would be to continue the splitting procedures until the decrease on impurity due to the split exceeds some threshold. This strategy is too short-sighted, however, since a seeming worthless split might lead to a very good split below it. The preferred strategy is to grow a large tree T0, stopping the splitting process when some minimum number of observations in a terminal node (say 10) is reached. Then this large tree is pruned using pruning algorithm, such as cost-complexity or split complexity pruning algorithm. To prune large tree T0 by using cost-complexity algorithm, we define a subtree T T0 to be any tree that can be obtained by pruning T0, and define to be the set of terminal nodes of T. That is, collapsing any number of its terminal nodes. 
As before, we index terminal nodes by m, with node m representing region R_m. Let |\tilde{T}| denote the number of terminal nodes in T (|\tilde{T}| = b); we use |\tilde{T}| instead of b, following conventional notation, and define the cost of the tree as

Regression tree: R(T) = \sum_{m=1}^{|\tilde{T}|} N_m Q_m(T),
Classification tree: R(T) = \sum_{h \in \tilde{T}} p(h) r(h), (7)

where r(h) measures the impurity of node h in a classification tree (any of the measures in (5)). We define the cost-complexity criterion (Breiman et al., 1984)

R_a(T) = R(T) + a |\tilde{T}|, (8)

where a (> 0) is the complexity parameter. The idea is, for each a, to find the subtree T_a \subseteq T_0 that minimizes R_a(T). The tuning parameter a > 0 governs the tradeoff between tree size and goodness of fit to the data (Hastie et al., 2001). Large values of a result in a smaller tree T_a, and conversely for smaller values of a. As the notation suggests, with a = 0 the solution is the full tree T_0. To find T_a we use weakest-link pruning: we successively collapse the internal node that produces the smallest per-node increase in R(T), and continue until we reach the single-node (root) tree. This gives a finite sequence of subtrees, and one can show that this sequence must contain T_a; see Breiman et al. (1984) and Ripley (1996) for details. Estimation of a (\hat{a}) is achieved by five- or ten-fold cross-validation, and our final tree is then T_{\hat{a}}. It follows that, in CART and related algorithms, classification and regression trees are produced from data in two stages. In the first stage, a large initial tree is produced by splitting one node at a time in an iterative, greedy fashion. In the second stage, a small subtree of the initial tree is selected, using the same data set. Whereas the splitting procedure proceeds in a top-down fashion, the second stage, known as pruning, proceeds bottom-up by successively removing nodes from the initial tree.

Theorem 1 (Breiman et al., 1984, Section 3.3) For any value of the complexity parameter a, there is a unique smallest subtree of T_0 that minimizes the cost-complexity.
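Weakest-link pruning can be sketched on a toy tree. In this hypothetical nested-dict representation (not CART's actual data structure), every node stores its own risk R(h); for each internal node h, g(h) = (R(h) - R(T_h)) / (|leaves of T_h| - 1) is the per-leaf increase in risk incurred by collapsing the branch T_h, and the node with the smallest g(h) is collapsed first.

```python
def leaves_and_risk(node):
    """Number of terminal nodes and total risk R(T_h) of the branch at node."""
    if "children" not in node:
        return 1, node["risk"]
    n, r = 0, 0.0
    for child in node["children"]:
        cn, cr = leaves_and_risk(child)
        n, r = n + cn, r + cr
    return n, r

def collect_g(node, out=None):
    """Collect (g(h), h) for every internal node h; the minimum is pruned first."""
    if out is None:
        out = []
    if "children" in node:
        n, r = leaves_and_risk(node)
        out.append(((node["risk"] - r) / (n - 1), node))
        for child in node["children"]:
            collect_g(child, out)
    return out
```

Repeatedly collapsing the argmin of g and recomputing yields the nested subtree sequence of Theorem 2; each collapse is the cheapest possible per-leaf sacrifice in fit, which is exactly why the sequence contains the cost-complexity optimum T_a for every a.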
Theorem 2 (Zhang & Singer, 1999, Section 4.2) If a_2 > a_1, the optimal subtree corresponding to a_2 is a subtree of the optimal subtree corresponding to a_1. More generally, suppose we end up with m thresholds 0 < a_1 < a_2 < ... < a_m; then

T_{a_m} \subseteq T_{a_{m-1}} \subseteq ... \subseteq T_{a_1} \subseteq T_0, (9)

where T' \subseteq T means that T' is a subtree of T. These are called nested optimal subtrees.

3. Decision Tree for Censored Survival Data

Survival analysis is the phrase used to describe the analysis of data corresponding to the time from a well-defined origin until the occurrence of some particular event or end-point. It is important to state what the event is and when the period of observation starts and finishes. In medical research, the time origin often corresponds to the recruitment of an individual into an experimental study, and the end-point is the death of the patient or the occurrence of some adverse event. Survival data are rarely normally distributed; they are skewed and typically comprise many early events and relatively few late ones. It is these features of the data that necessitate the special methods of survival analysis. The specific difficulties of survival analysis arise largely from the fact that only some individuals have experienced the event and, consequently, survival times are unknown for a subset of the study group. This phenomenon is called censoring, and it may arise in the following ways: (a) a patient has not (yet) experienced the relevant outcome, such as relapse or death, by the time the study has to end; (b) a patient is lost to follow-up during the study period; (c) a patient experiences a different event that makes further follow-up impossible. Generally, censoring times may vary from individual to individual. Such a censored survival time underestimates the true (but unknown) time to event. Visualising the survival process of an individual as a time-line, the event (assuming it is to occur) lies beyond the end of the follow-up period. This situation is called right censoring.
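The standard way to summarize right-censored data of this kind is the Kaplan-Meier estimator, which the sketch below implements for illustration (it is not part of the tree methodology itself). Each time is paired with an event indicator (1 = event observed, 0 = censored); at each observed event time t the survival estimate is multiplied by (1 - d_t / n_t), where d_t is the number of events at t and n_t the number still at risk just before t, so censored subjects contribute to the risk sets without triggering a drop.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve as a list of (event time, S(t)) pairs."""
    data = sorted(zip(times, events))
    s, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)        # events at time t
        removed = sum(1 for tt, _ in data if tt == t)  # leave the risk set
        at_risk = sum(1 for tt, _ in data if tt >= t)  # n_t just before t
        if d > 0:
            s *= 1 - d / at_risk
            curve.append((t, s))
        i += removed
    return curve
```

With times [1, 2, 3, 4] and events [1, 0, 1, 0], the curve drops to 0.75 at t = 1 and to 0.375 at t = 3; the censored observations at t = 2 and t = 4 cause no drop but shrink the risk sets, which is precisely the information a naive mean of observed times would throw away.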
Most survival data include right-censored observations. In many biomedical and reliability studies, interest focuses on relating the time to event to a set of covariates. The Cox proportional hazards model (Cox, 1972) has been established as the major framework for the analysis of such survival data over the past three decades. Often in practice, however, one primary goal of survival analysis is to extract meaningful subgroups of patients determined by prognostic factors, such as patient characteristics, that are related to the level of disease. Although the proportional hazards model and its extensions are powerful in studying the association between covariates and survival times, they are usually problematic for prognostic classification. One approach to classification is to compute a risk score based on the estimated coefficients from regression methods (Machin et al., 2006). This approach, however, may be problematic for several reasons. First, the definition of risk groups is arbitrary. Secondly, the risk score depends on the correct specification of the model, which is difficult to check when many covariates are involved. Thirdly, when there are many interaction terms and the model becomes complicated, the result becomes difficult to interpret for the purpose of prognostic classification. Finally, and more seriously, an invalid prognostic group may be produced if no patient has a given covariate profile. In contrast, DT methods do not suffer from these problems. Owing to the development of fast computers, computer-intensive methods such as DT methods have become popular. Since they investigate the significance of all potential risk factors automatically and provide interpretable models, they offer distinct advantages to analysts.
Recently, a large number of DT methods have been developed for the analysis of survival data, in which the basic concepts for growing and pruning trees remain unchanged but the splitting criterion is modified to accommodate censored survival data. The application of DT methods to survival data is described by a number of authors (Gordon & Olshen, 1985; Ciampi et al., 1986; Segal, 1988; Davis & Anderson, 1989; Therneau et al., 1990; LeBlanc & Crowley, 1992; LeBlanc & Crowley, 1993; Ahn & Loh, 1994; Bacchetti & Segal, 1995; Huang et al., 1998; Keles & Segal, 2002; Jin et al., 2004; Cappelli & Zhang, 2007; Cho & Hong, 2008), including the text by Zhang & Singer (1999).

4. Decision Tree for Multivariate Censored Survival Data

Multivariate survival data frequently arise when we face the complexity of studies involving multiple treatment centres, family members, or measurements made repeatedly on the same individual. For example, in multi-centre clinical trials, the outcomes of groups of patients at several centres are examined. In some instances, patients in a centre might exhibit similar responses due to uniformity of surroundings and procedures within the centre, resulting in outcomes correlated at the level of the treatment centre. In studies of family members or litters, correlation in outcome is likely for genetic reasons; in this case, the outcomes are correlated at the family or litter level. Finally, when one person or animal is measured repeatedly over time, correlation will almost certainly exist among those responses. Within the context of correlated data, the observations that are correlated for a group of individuals (within a treatment centre or a family) or for one individual (because of repeated sampling) are referred to as a cluster, so that from this point on, the responses within a cluster will be assumed to be correlated.
Analysis of multivariate survival data is complex owing to the presence of dependence among survival times and unknown marginal distributions. Multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual may experience multiple events. A successful treatment of correlated failure times was made by Clayton and Cuzick (1985), who modelled the dependence structure with a frailty term. Another approach is based on a proportional hazards formulation of the marginal hazard function, studied by Wei et al. (1989) and Liang et al. (1993). Notably, Prentice et al. (1981) and Andersen & Gill (1982) suggested two further approaches to the analysis of multiple event times. The extension of tree techniques to multivariate censored data is motivated by the classification issues associated with multivariate survival data. For example, clinical investigators design studies to form prognostic rules, and credit risk analysts collect account information to build credit scoring criteria. Frequently, in such studies, the outcomes of ultimate interest are correlated times to event, such as relapses, late payments, or bankruptcies. Since DT methods recursively partition the predictor space, they are an alternative to conventional regression tools. This section is concerned with the generalization of DT models to multivariate survival data, an extension in which several additional difficulties must be circumvented.

4.1 Decision tree for multivariate survival data based on marginal model

Few DT methods exist for multivariate survival data. Almost all of them are based on between-node heterogeneity, with the exception of Molinaro et al. (2004), who proposed a general within-node homogeneity approach for both univariate and multivariate data. The multivariate methods proposed by Su & Fan (2001, 2004) and Gao et al.
(2004, 2006) concentrated on between-node heterogeneity and used the results of regression models. Specifically, for recurrent event data and clustered event data, Su & Fan (2004) used likelihood-ratio tests, while Gao et al. (2004) used robust Wald tests from a gamma frailty model to maximize the between-node heterogeneity. Su & Fan (2001) and Fan et al. (2006) used a robust log-rank statistic, while Gao et al. (2006) used a robust Wald test from the marginal failure-time model of Wei et al. (1989). The generalization of DT to multivariate survival data is developed using the goodness-of-split approach, in which the tree is grown by maximizing a measure of between-node difference. Therefore, only internal nodes have associated two-sample statistics. The tree structure differs from CART because, for trees grown by minimizing within-node error, each node, whether terminal or internal, has an associated impurity measure. This is why the CART pruning procedure is not directly applicable to such trees. However, with the split-complexity pruning algorithm of LeBlanc & Crowley (1993), trees grown by goodness of split have become well-developed tools. This modified tree technique not only provides a convenient way of handling survival data, but also enlarges the applied scope of DT methods in a more general sense. Especially in situations where defining a prediction error is relatively difficult, growing trees by a two-sample statistic, together with split-complexity pruning, offers a feasible way of performing tree analysis. The DT procedure consists of three parts: a method to partition the data recursively into a large tree, a method to prune the large tree into a subtree sequence, and a method to determine the optimal tree size. In the multivariate survival trees, the between-node difference is measured by a robust Wald statistic, derived from the marginal approach to multivariate survival data developed by Wei et al.
(1989). We use split-complexity pruning borrowed from LeBlanc & Crowley (1993) and use a test sample to determine the right tree size.

4.1.1 The splitting statistic

We consider n independent subjects and allow each subject to have K potential types or numbers of failures; if the number of failures differs between subjects, K is the maximum. Let T_{ik} = min(Y_{ik}, C_{ik}), where Y_{ik} is the time of failure of the ith subject for the kth type of failure and C_{ik} is the potential censoring time of the ith subject for the kth type of failure, with i = 1, ..., n and k = 1, ..., K. Then d_{ik} = I(Y_{ik} \le C_{ik}) is the indicator of failure, and the vector of covariates is denoted Z_{ik} = (Z_{1ik}, ..., Z_{pik})^T. To partition the data, we consider the hazard model for the ith unit and the kth type of failure, using the distinguishable baseline hazards described by Wei et al. (1989), namely

\lambda_k(t; Z_{ik}) = \lambda_{0k}(t) \exp\{b \, I(Z_{ik} \le c)\}, (10)

where the indicator function I(Z_{ik} \le c) equals one when the candidate split sends the unit to the left child node and zero otherwise. The parameter b is estimated by maximizing the partial likelihood. If the observations within the same unit were independent, the partial likelihood function for b under the distinguishable baseline model (10) would be

L(b) = \prod_{k=1}^{K} \prod_{i=1}^{n} \left[ \frac{\exp\{b \, I(Z_{ik} \le c)\}}{\sum_{j \in R_k(T_{ik})} \exp\{b \, I(Z_{jk} \le c)\}} \right]^{d_{ik}}, (11)

where R_k(t) denotes the set of units at risk for the kth type of failure at time t. Since the observations within the same unit are not independent for multivariate failure times, we refer to this function as the pseudo-partial likelihood. The estimator \hat{b} is obtained by maximizing the likelihood, i.e. by solving \partial \log L(b) / \partial b = 0. Wei et al. (1989) showed that \hat{b} is asymptotically normally distributed with mean 0. However, the usual estimate A^{-1}(b) of the variance of \hat{b}, where

A(b) = -\partial^2 \log L(b) / \partial b^2, (12)

is not valid; we refer to A^{-1}(b) as the naive estimator. Wei et al. (1989) showed that a valid (robust) variance estimator of \hat{b} is

D(\hat{b}) = A^{-1}(\hat{b}) B(\hat{b}) A^{-1}(\hat{b}), (13)

where B(b) is a correction term computed from the score residuals; D(\hat{b}) is often referred to as the robust or sandwich variance estimator. Hence, the robust Wald statistic corresponding to the null hypothesis H_0 : b = 0 is

W = \hat{b}^2 / D(\hat{b}). (14)

4.1.2 Tree growing

To grow a tree, the robust Wald statistic is evaluated for every possible binary split of the predictor space Z.
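The sandwich construction in (13)-(14) is mechanical once A and B are available. The sketch below shows only that final assembly step, in the general vector form W = b' D^{-1} b; the matrices fed in are made-up numbers for illustration, not the output of a fitted marginal model.

```python
import numpy as np

def robust_wald(beta_hat, A, B):
    """Robust Wald statistic for H0: beta = 0 using D = A^-1 B A^-1.

    A is the negative Hessian of the log pseudo-partial likelihood (12);
    B is the score-residual correction; D is the sandwich estimator (13).
    """
    A_inv = np.linalg.inv(A)
    D = A_inv @ B @ A_inv                      # sandwich variance, eq. (13)
    return float(beta_hat @ np.linalg.inv(D) @ beta_hat)
```

In the scalar case of a single split parameter b, this reduces to equation (14): with b-hat = 0.5, A = 2 and B = 1 we get D = 0.25 and W = 1.0. If the within-cluster observations really were independent, B would approximately equal A and D would collapse to the naive estimator A^{-1}.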
The split, s, could take several forms: splits on a single covariate, splits on linear combinations of predictors, and Boolean combinations of splits. The simplest form of split involves only one covariate, and depends on whether the covariate is ordered or nominal. The "best split" is defined to be the one with the maximum robust Wald statistic. The data are then divided into two groups according to the best split, and this splitting scheme is applied recursively to the learning sample until the predictor space is partitioned into many regions. A node is not partitioned further when any of the following occurs: (i) the node contains fewer than, say, 10 or 20 subjects, if the overall sample size is large enough to permit this (we suggest a larger minimum node size than in CART, where the default value is 5); (ii) all the observed times in the subset are censored, so that the robust Wald statistic is unavailable for any split; (iii) all the subjects have identical covariate vectors, or the node contains only complete observations with identical survival times. In these situations the node is considered pure. The whole procedure results in a large tree, which can be used for the purpose of exploring the data structure.

4.1.3 Tree pruning

Let T denote either a particular tree or the set of all its nodes, and let S and \tilde{T} denote the sets of internal and terminal nodes of T, respectively; therefore T = S \cup \tilde{T}. Also let |\cdot| denote the number of nodes in a set. Let G(h) represent the maximal robust Wald statistic at a particular (internal) node h. To measure the performance of a tree, a split-complexity measure G_a(T) is introduced, as in LeBlanc and Crowley (1993):

G_a(T) = G(T) - a |S|, where G(T) = \sum_{h \in S} G(h), (15)

in which the number of internal nodes, |S|, measures complexity; G(T) measures the goodness of split in T; and the complexity parameter a acts as a penalty for each additional split.
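Equation (15) is a one-liner once the split statistics are collected; the sketch below takes the list of G(h) values over the internal nodes (hypothetical numbers here, standing in for robust Wald statistics) and applies the complexity penalty.

```python
def split_complexity(internal_G, a):
    """G_a(T) of eq. (15): total split statistic minus a per internal node."""
    return sum(internal_G) - a * len(internal_G)
```

For instance, a tree with three internal splits whose robust Wald statistics are 5.0, 3.0 and 2.0 has G_a(T) = 10.0 - 3a, so at a = 2 the measure is 4.0; as a grows, smaller trees with only the strongest splits score better, which is exactly the tradeoff the penalty is meant to enforce.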
Start with the large tree T_0 obtained from the splitting procedure. For any internal node h of T_0, i.e. h \in S_0, a function g(h) is defined as

g(h) = \frac{G(T_h)}{|S_h|} = \frac{1}{|S_h|} \sum_{h' \in S_h} G(h'), (16)

where T_h denotes the branch with h as its root and S_h is the set of all internal nodes of T_h. The weakest link in T_0 is then the node h^* such that g(h^*) = \min_{h \in S_0} g(h).
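The weakest-link computation in (16) can be sketched directly. In this hypothetical nested-dict representation, each internal node carries its split statistic G(h) (a robust Wald statistic in the method above, but any goodness-of-split statistic works); g(h) is the average of G over the internal nodes of the branch rooted at h, and the branch with the smallest average is pruned first.

```python
def branch_g(node):
    """Return (|S_h|, sum of G over S_h) for the branch rooted at node."""
    if "children" not in node:
        return 0, 0.0                      # terminal nodes contribute nothing
    n, g = 1, node["G"]
    for child in node["children"]:
        cn, cg = branch_g(child)
        n, g = n + cn, g + cg
    return n, g

def weakest_link(root):
    """Internal node h minimizing g(h) = G(T_h) / |S_h|, as in eq. (16)."""
    best = None
    stack = [root]
    while stack:
        node = stack.pop()
        if "children" in node:
            n, g = branch_g(node)
            if best is None or g / n < best[0]:
                best = (g / n, node)
            stack.extend(node["children"])
    return best
```

For a root with G = 8.0 whose only internal child has G = 2.0, the root's branch averages (8 + 2)/2 = 5.0 while the child's branch averages 2.0, so the child is the weakest link: its splits contribute least evidence of between-node difference per split and are the first candidates for removal.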