Friday, September 6, 2019

Financial Analysis of Cadbury Schweppes Essay Example for Free

The capital structure of Cadbury Schweppes, based on its 2006 balance sheet, shows that the company uses more debt than equity to finance its operations. The company's debt to total stockholders' equity is more than fifty percent, and its debt to equity ratio stands at 1.30. A high debt to equity ratio means that the company relies heavily on debt financing. It does not necessarily mean that the company has poor financial leverage, because some industries are capital intensive and require companies to incur large amounts of debt to finance their operations. One such industry is the automobile industry, where a debt to equity ratio of two is still considered acceptable. Cadbury Schweppes, however, is engaged in manufacturing candy, chocolate and drinks, an industry that is not as capital intensive as car manufacturing, so its debt to equity ratio may be too high.

The company has been reshaping its operations over the years. It has gradually divested investments that do not fall within its core businesses of confectionery and beverages. While it disposed of some incompatible businesses, it continued to expand its confectionery and beverage operations. These acquisitions, particularly those made in the United States, may explain its large debt. Debt is used by the company to expand its operations and, as a consequence, increase its profits. The company's performance has been growing every year, so it is possible that the company has determined that the cost of expanding operations, in the form of interest payments, is much lower than the benefit in the form of increased sales. Carrying a large amount of debt is detrimental only if a company is unable to recoup the cost of that debt; this is not the case for Cadbury Schweppes.
The dividend yield ratio compares the income received by each share of stock with the cost of that share. The dividend yield necessarily varies over time because the market value of a share changes as it is traded. A comparison of dividend yield over time can be used to gauge whether the performance of the company is improving, but this ratio should not be analyzed on its own; it must be considered together with other factors such as the market value of the share. A low dividend yield can mean that the company's share is priced highly by the market, not necessarily that the company is unable to make dividend payments. Conversely, a high dividend yield can mean that the company's share has a very low market value, not that the company gives its shareholders large dividends. The company has a dividend yield of 2.30%, and its share has a market value ranging from 51.5 to 51.6. Based on these figures, its dividend yield is not the product of an extremely high or low share price. The price/earnings ratio, on the other hand, is seen by investors as a gauge of how much the market values the company's share. In this case, the company has a price/earnings ratio of 24.22, very close to the industry average. This means that the company is competitive with other members of the industry and is generally viewed by the investing community as a good investment. Based on its dividend yield and price/earnings ratio, the company is able to compensate stockholders despite its large debts. This is probably because the company's earnings are divided among fewer shares than if the company had chosen to finance its operations with equity rather than debt. The largest shareholders of the company are Franklin Resources, Inc. and Legal and General, with shareholdings of 4.01% and 3.47%, respectively.
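For concreteness, the three ratios discussed above can be computed with a few one-line functions. The figures used below are illustrative stand-ins chosen to be consistent with the ratios quoted in the essay (a 1.30 debt to equity ratio, a yield near 2.30%, a P/E near 24.22); they are not numbers taken from Cadbury Schweppes' accounts:

```python
def debt_to_equity(total_debt, total_equity):
    """Debt financing relative to shareholder financing."""
    return total_debt / total_equity

def dividend_yield(dividend_per_share, price_per_share):
    """Income received per share relative to the cost of that share."""
    return dividend_per_share / price_per_share

def price_earnings(price_per_share, earnings_per_share):
    """How much the market pays for each unit of earnings."""
    return price_per_share / earnings_per_share

# Illustrative figures only:
print(debt_to_equity(6500.0, 5000.0))   # 1.3
print(dividend_yield(1.19, 51.6))       # roughly 0.023, i.e. about 2.3%
print(price_earnings(51.6, 2.13))       # roughly 24.2
```

A low yield paired with a high P/E, as here, illustrates the essay's point that yield must be read alongside the share's market price rather than on its own.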

Thursday, September 5, 2019

Pathophysiology Of Dvt Formation Health And Social Care Essay

DVT is the result of a number of factors that include stasis of blood, endothelial injury and hypercoagulability of blood. PE is a major complication of DVT and occurs when a thrombus or blood clot detaches itself and is carried by the blood stream to the lungs. [J32] Proximal DVT carries a higher risk of PE than distal DVT. [J30, Havig] We focused on proximal DVT because it is much more reliably detected by ultrasonography and is considered to be clinically more important. [J53: 11,12] DVT can occur in any vein (near the neck, etc.); however, such cases are not included in this literature review because… Upper limb DVT is being reported, particularly in association with central venous catheters. (K66, from J20:54) After a stroke, blood clots can form in the veins of the legs (deep vein thrombosis, or DVT). These clots can break off and be carried in the blood stream to the heart and lungs (causing pulmonary embolism). This can be life threatening. [J30] Deep venous thrombosis may lead to pulmonary emboli, a frequent cause of avoidable deaths. [K52, from J53:1]

Virchow's triad. The pathophysiological mechanisms underlying DVT include venous stasis and hypercoagulability linked to an increase in thrombin formation and platelet hyperactivity (Virchow 1858). [J30] The occurrence of one or more factors of Virchow's triad (stasis of blood, endothelial injury and hypercoagulability of blood) in the venous system often leads to deep vein thrombosis (DVT) (Virchow 1858). [J18] DVT → PE: (to find the incidence and mortality rate of PE (acute + rehab), J43 p. 263 has it) Lower extremity DVT can be anatomically divided into proximal DVT, involving the popliteal vein and more proximal veins, and distal DVT, involving the calf vein and more distal veins.
[J59] DVT in the paralyzed legs of patients with stroke was reported as early as 1810 by Ferriar and again by Lobstein in 1833. [J45]

Pathophysiology of DVT formation. According to the Medsurg text, venous return is aided by the calf muscle pump. When the legs are inactive or the pump is ineffective, blood pools by gravity in the veins. Thrombus development is a local process. It begins with platelet adherence to the endothelium. Several factors promote platelet aggregation, including thrombin, fibrin, activated factor X, and catecholamines. In addition, where the platelets adhere to collagen, adenosine diphosphate (ADP) is released. ADP is also released from damaged tissues and disrupted platelets. ADP produces platelet aggregation that results in a platelet plug. Deep vein thrombi vary from 1 mm in diameter to long tubular masses filling main veins. Small thrombi are found commonly in the pockets of deep vein valves. As thrombi become larger in diameter and length, they obstruct the veins; the resulting inflammatory process can destroy the valves of the veins, and thus venous insufficiency and postphlebitic syndrome are initiated. Newly formed thrombi may become pulmonary emboli. Probably 24 to 48 hours after formation, thrombi undergo lysis or become organized and adhere to the vessel wall. Lysis diminishes the risk of embolization. Pulmonary emboli, most of which start as thrombi in the large deep veins of the leg, are an acute and potentially lethal complication of DVT. Venous thrombosis is the process of clot (thrombus) formation within veins. Although this can occur in any venous system, the predominant clinical events occur in the vessels of the leg, giving rise to deep vein thrombosis, or in the lungs, resulting in a pulmonary embolus (PE). [J56] In fact, about 90% of DVTs are of the ascending type. The potential for embolism depends on the speed and extent of the dynamic, ascending clot-growing process. Almost all clinical PE originates from distal DVT.
Only the remaining 10% are derived from clots without connection to the lower leg veins (e.g. isolated iliac vein thrombosis, transfascial great or small saphenous vein thrombosis, subclavian vein thrombosis, or catheter-related thrombosis). [J58] Damage to the endothelial cell lining of the blood vessel is one of the extrinsic factors triggering the clotting cascade. The damaged endothelium attempts to maintain vascular integrity by adhesion and aggregation of platelets. As the clotting cascade continues, the final step is the formation of thrombin, which leads to the conversion of fibrinogen to fibrin and the formation of a fibrin clot. (Arcangelo & Peterson, 2006) (from K84, J40: Arcangelo) Abnormal blood clots that adhere to the vessel wall are known as thrombi. These are composed of blood cells, platelets, and fibrin. Arterial thrombi are composed mainly of platelet aggregates and fibrin; venous thrombi are composed mainly of red blood cells. The difference in composition is caused by the conditions in which the thrombus forms: in the artery, blood flow is high in comparison with the low-flow conditions in the vein. The thrombus may become large enough to interfere with blood flow within the vein or artery. (Mansen & McCance, 2002) (from K85, J40: Mansen) If the thrombus detaches from the vessel wall, it becomes an embolus. This mobile clot travels through the circulation until it lodges in a blood vessel that is smaller than the clot. Distal to this point, blood flow is blocked and tissues or organs are deprived of oxygen and nutrition. (Mansen & McCance, 2002) The signs and symptoms associated with an embolus depend on the vein or artery where the clot becomes lodged. (from K85, J40: Mansen) In 1856, Virchow described the factors that predispose to venous thrombosis: stasis, vascular damage, and hypercoagulability. These three factors are referred to as Virchow's triad.
Stasis of blood may occur because of immobility, age, obesity, or disease processes. Trauma (including surgery), intravenous (IV) cannulation, medications, and toxins are some of the many sources that may precipitate vascular damage. Hypercoagulability of the blood may be caused by various disease processes and medications. (Mansen & McCance, 2002) (from K85, J40: Mansen)

Why focus on DVT rather than PE and VTE? A high proportion of patients with DVT also have subclinical PE. [K15, from J45:14] Most PE results from DVT (please find literature to support). Since lower limb DVT is the major origin of PE, and prolonged bed rest is characteristic of stroke, this literature review will mainly focus on DVT of the lower limbs. Approximately two thirds of these are below-knee DVTs, in contrast to unselected (non-stroke) patients presenting with symptomatic DVT, in whom the majority are proximal. [J43] Most studies show that PE seems to be much more common in patients with proximal and symptomatic DVT. [K41, from J46:1] Clinical symptoms of DVT developed in six patients (oedema or pain of the lower extremity; no cases of PE). (out of 28, = 21.4%) (J48's result)

Why stroke patients easily develop DVT. The general stroke population is at risk for DVT because of the following factors. First, there is an alteration in blood flow due to weakness in the lower limb and a resulting hypercoagulable state related to changes in the blood. Second, vessel wall intimal injury occurs related to changes in blood and blood flow. Stroke patients may also have symptoms associated with DVT, such as swelling and Homans' sign, that may be misinterpreted as being related to the stroke. [J50] Stroke patients are often bed-ridden, especially during the acute phase, because of paresis. [J50] Most stroke patients are elderly (age > ), while aging is a significant factor in the occurrence of DVT.
Patients with stroke are at particular risk for developing deep venous thrombosis (DVT) and pulmonary embolism (PE) because of limb paralysis, prolonged bed rest, and increased prothrombotic activity. [J45 (also coded at J51)] Sioson et al. [46] reported 19 DVT events in the paretic limb, nine bilateral events and four contralateral, in 32 patients prospectively followed. (K49 from J46:46)

Why prevention is important. WHO estimates that 15 million people have a stroke every year, and this number is rising. (K91, from J39:2) Venous thromboembolism is a common but preventable complication of acute ischaemic stroke, and is associated with increased mortality, long-term morbidity, and substantial health-care costs for its management. (K92, from J39:6) Without venous thromboembolism prophylaxis, up to 75% of patients with hemiplegia after stroke develop deep vein thrombosis and 20% develop pulmonary embolism, (K93, from J39:8) which is fatal in 1-2% of patients with acute ischaemic stroke and causes up to 25% of early deaths after stroke. (K94, from J39:9) Low molecular weight heparin and unfractionated heparin are therefore recommended in guidelines from expert consensus groups.10-14 (K95, from J39:10-14) The best treatment for VTE is prevention. [J34] It is a cause of preventable death. [J06] Deep venous thromboembolism (DVT) is an important health issue in hospitalized patients that leads to increased length of stay, morbidity, and mortality. [J50] Early detection of DVT is important because of the risk of pulmonary embolism and its potentially fatal consequences. However, it is well known that the clinical features of DVT and PE are notoriously nonspecific. [J09] Despite improvements in prevention (SPARCL 2006), little progress has been made in treating stroke with specific interventions once it has occurred.
(K72, from J44) The occurrence of venous thromboembolism was about two-fold higher in patients with an NIHSS score of 14 or more than in those with a score less than 14 (in line with previous studies25). (K99, from J39:25 + J39 self) Patients with intracerebral hemorrhage (ICH) or ischemic stroke are at high risk for development of venous thromboembolism (VTE). (K103, from J29:1) In comparison to patients with ischemic stroke, the risk for VTE is higher in the hemorrhagic stroke population. (K104, from J29:2) Without preventative measures, 53% and 16% of immobilized patients in this population develop deep venous thrombosis (DVT) or pulmonary embolism (PE), respectively. (K105, from J29:3) One study detected DVT in 40% of patients with ICH within 2 weeks, and 1.9% of those patients had a PE.4 (K106, from J29:4) Development of VTE in the patient with ICH adds further detrimental complications to an already lethal disease with a 1-month case-fatality rate of 35% to 52%.5 (K107, from J29:5) DVT also prolongs the length of hospital stays, delays rehabilitation programs, and introduces a potential risk for PE. (K108, from J29:6) DVT prolongs hospitalization and increases healthcare costs. [J01] DVT is the pathophysiological precursor of pulmonary embolism (PE); however, half of DVT cases are asymptomatic. [J01, K1 from J37:18, J37, J27] Approximately one third of patients with symptomatic venous thromboembolism (VTE) manifest pulmonary embolism (PE), whereas two thirds manifest deep vein thrombosis (DVT) alone. Moreover, death occurs in 6% of DVT cases and 12% of PE cases within 1 month of diagnosis. [J46, J27] Clinically apparent DVT has been reported in 1.7% to 5.0% of patients with stroke. Subclinical DVT occurred in 28% to 73% of patients with stroke, usually in the paralyzed limb. [J45] The frequency of asymptomatic PE in patients with DVT has been reported to be 40%.
[J50] Prevention of VTE is highly effective in lowering the morbidity and mortality of stroke patients, since PE accounts for up to 25% of post-stroke early deaths. [J43] (Bounds JV, Wiebers DO, Whisnant JP, Okazaki H: Mechanisms and timing of deaths from cerebral infarction. Stroke 1981, 12:474-477.) The rate of PE is likely to be underestimated because it is not routinely screened for, and autopsies are rarely performed. Fifty percent of patients who die following an acute stroke show evidence of PE on autopsy. [K68, from J13:7] The annual incidence of DVT in the general population is estimated to be about 1 per 1000 (8); however, it should be noted that much of the published data are derived from patients who present with symptoms at medical institutions. Diagnosis of DVT has traditionally been based on clinical presentation; however, evidence from postmortem studies indicates that a substantial proportion of VTE cases are asymptomatic. [K10 from J55] Clinically apparent DVT confirmed on investigation is less common, but DVTs may not be recognised and may still cause important complications. Pulmonary embolism (PE) is an important cause of preventable death after stroke. [K67, from J13:4]

Wednesday, September 4, 2019

The Underwater Wireless Communications Information Technology Essay

While wireless communication technology has become part of our daily life, the idea of wireless undersea communication may still seem far-fetched. However, research has been active for over a decade on designing methods for wireless information transmission underwater. The major discoveries of the past decades have motivated researchers to develop better and more efficient ways to enable unexplored applications and to enhance our ability to observe and predict the ocean. The purpose of this paper is to introduce to the reader the basic concepts, architectures, protocols and modems used in underwater wireless communications. The paper also presents the difficulties faced in terms of power management and security, and the latest developments in the underwater wireless industry. Towards the end, we also discuss a wide range of applications of underwater wireless communication. Index Terms: Underwater Wireless Communication (UWC), Medium Access Control (MAC), Underwater Acoustic Sensor Networks (UWASNs).

I. INTRODUCTION
In the last several years, underwater sensor networks (UWSNs) have found increasing use in a wide range of applications, such as coastal surveillance systems, environmental research, autonomous underwater vehicle (AUV) operation, and many civilian and military applications such as oceanographic data collection, scientific ocean sampling, pollution and environmental monitoring, climate recording, offshore exploration, disaster prevention, assisted navigation, distributed tactical surveillance, and mine reconnaissance. By deploying a distributed and scalable sensor network in a 3-dimensional underwater space, each underwater sensor can monitor and detect environmental parameters and events locally. Hence, compared with remote sensing, UWSNs provide a better sensing and surveillance technology for acquiring data to understand the spatial and temporal complexities of underwater environments.
Some of these applications can be supported by underwater acoustic sensor networks (UWASNs), which consist of devices with sensing, processing, and communication capabilities that are deployed to perform collaborative monitoring tasks. Fig. 1 gives a generalized diagram of a UWASN. Wireless signal transmission is also crucial for remotely controlling instruments in ocean observatories and for enabling coordination of swarms of autonomous underwater vehicles (AUVs) and robots, which will play the role of mobile nodes in future ocean observation networks by virtue of their flexibility and reconfigurability. Present underwater communication systems involve the transmission of information in the form of sound, electromagnetic (EM), or optical waves. Each of these techniques has advantages and limitations. Acoustic communication is the most versatile and widely used technique in underwater environments due to the low attenuation (signal reduction) of sound in water. This is especially true in thermally stable, deep water settings. On the other hand, the use of acoustic waves in shallow water can be adversely affected by temperature gradients, surface ambient noise, and multipath propagation due to reflection and refraction. The much slower speed of acoustic propagation in water, about 1500 m/s (meters per second), compared with that of electromagnetic and optical waves, is another limiting factor for efficient communication and networking. Nevertheless, the currently favored technology for underwater communication is based upon acoustics. As for using electromagnetic (EM) waves at radio frequencies, conventional radio does not work well in an underwater environment due to the conducting nature of the medium, especially in the case of seawater. (Figure 1. Scenario of a UW-ASN composed of underwater and surface vehicles.)
However, if EM could work underwater, even over a short distance, its much faster propagation speed would be a great advantage for faster and more efficient communication among nodes. Free-space optical (FSO) waves used as wireless communication carriers are generally limited to very short distances because of the severe water absorption in the optical frequency band and strong backscatter from suspended particles. Even the clearest water has 1000 times the attenuation of clear air, and turbid water has more than 100 times the attenuation of the densest fog. Nevertheless, underwater FSO, especially in the blue-green wavelengths, offers a practical choice for high-bandwidth communication (10-150 Mbps, bits per second) over moderate ranges (10-100 meters). This communication range is much needed in harbor inspection, oil-rig maintenance, and linking submarines to land, to name just a few of the demands on this front. In this paper, we discuss the physical fundamentals and the implications of using acoustic waves as the wireless communication carrier in underwater environments in Section II; in Section III we give an overview of routing protocols for underwater wireless communications; in Section IV we discuss the two networking architectures of UWSNs; in Section V we discuss acoustic modem technology and describe Link Quest Inc.'s cutting-edge acoustic modems in detail; Section VI gives a comparison of ground-based sensor networks with mobile UWSNs; in Section VII we throw some light on the various applications of UWC; and finally we conclude the paper in Section VIII, followed by references.

II. ACOUSTIC WAVES
Among the three types of waves, acoustic waves are used as the primary carrier for underwater wireless communication systems due to their relatively low absorption in underwater environments.
We start the discussion with the physical fundamentals and the implications of using acoustic waves as the wireless communication carrier in underwater environments.

Propagation velocity: The extremely slow propagation speed of sound through water is an important factor that differentiates it from electromagnetic propagation. The speed of sound in water depends on the water properties of temperature, salinity and pressure (directly related to depth). A typical speed of sound in water near the ocean surface is about 1520 m/s, which is more than 4 times faster than the speed of sound in air, but five orders of magnitude smaller than the speed of light. The speed of sound in water increases with increasing water temperature, increasing salinity and increasing depth. Most of the changes in sound speed in the surface ocean are due to changes in temperature. Approximately, the sound speed increases by 4.0 m/s as the water temperature rises by 1°C. When salinity increases by 1 practical salinity unit (PSU), the sound speed in water increases by 1.4 m/s. As the depth of water (and therefore also the pressure) increases by 1 km, the sound speed increases by roughly 17 m/s. It is noteworthy that these assessments are only for rough quantitative or qualitative discussion, and the variations in sound speed for a given property are not linear in general. (Fig. 2. A vertical profile of sound speed in seawater as a lump-sum function of depth.)

Absorption: The absorptive energy loss is directly controlled by the material imperfection for the type of physical wave propagating through it. For acoustic waves, this material imperfection is inelasticity, which converts the wave energy into heat. The absorptive loss for acoustic wave propagation is frequency-dependent, and can be expressed as e^(α(f)·d), where d is the propagation distance and α(f) is the absorption coefficient at frequency f.
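The per-unit sensitivities quoted above for propagation velocity (+4.0 m/s per °C, +1.4 m/s per PSU, +17 m/s per km of depth) can be collected into a toy linear estimator. This is only an illustrative sketch of the text's rough numbers (the function name and its 1520 m/s reference speed are assumptions for the example); real sound-speed profiles are non-linear:

```python
def sound_speed_approx(delta_t_c=0.0, delta_s_psu=0.0, depth_km=0.0):
    """Rough sound speed in m/s: a nominal near-surface 1520 m/s plus the
    linear sensitivities quoted in the text (+4.0 m/s per deg C,
    +1.4 m/s per PSU, +17 m/s per km of depth)."""
    return 1520.0 + 4.0 * delta_t_c + 1.4 * delta_s_psu + 17.0 * depth_km

# A parcel of water 2 deg C warmer and 1 km deeper than the reference:
print(sound_speed_approx(delta_t_c=2.0, depth_km=1.0))  # 1545.0
```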
For seawater, the absorption coefficient at frequency f in kHz can be written as the sum of chemical relaxation processes and absorption from pure water:

α(f) = (A1·P1·f1·f²)/(f1² + f²) + (A2·P2·f2·f²)/(f2² + f²) + A3·P3·f²

where the first term on the right side is the contribution from boric acid, the second term is the contribution of magnesium sulphate, and the third term is the contribution of pure water; A1, A2, and A3 are constants; the pressure dependencies are given by parameters P1, P2 and P3; and the relaxation frequencies f1 and f2 are for the relaxation processes in boric acid and magnesium sulphate, respectively. Fig. 3 shows the relative contribution from the different sources of absorption as a function of frequency. (Fig. 3. Absorption in generic seawater.)

Multipath: An acoustic wave can reach a certain point through multiple paths. In a shallow water environment, where the transmission distance is larger than the water depth, wave reflections from the surface and the bottom generate multiple arrivals of the same signal. Fig. 4 illustrates these adverse effects. (Fig. 4: Shallow water multipath propagation: in addition to the direct path, the signal propagates via reflections from the surface and bottom.) In deep water, multipath occurs due to ray bending, i.e. the tendency of acoustic waves to travel along the axis of lowest sound speed. The channel response varies in time, and also changes if the receiver moves. Regardless of its origin, multipath propagation creates signal echoes, resulting in intersymbol interference in a digital communication system. While in a cellular radio system multipath spans a few symbol intervals, in an underwater acoustic channel it can span a few tens, or even hundreds, of symbol intervals. To avoid intersymbol interference, a guard time of length at least equal to the multipath spread must be inserted between successively transmitted symbols. However, this will reduce the overall symbol rate, which is already limited by the system bandwidth.
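The trade-off just described, inserting a guard time at least as long as the multipath spread after every symbol, can be quantified with a small sketch. The function and the example numbers are illustrative assumptions (taking the symbol duration to be roughly the inverse of the bandwidth):

```python
def symbol_rate_with_guard(bandwidth_hz, multipath_spread_s):
    """Achievable symbol rate when each symbol (duration ~1/bandwidth)
    is followed by a guard time equal to the multipath spread."""
    t_symbol = 1.0 / bandwidth_hz
    return 1.0 / (t_symbol + multipath_spread_s)

# A 5 kHz channel whose echoes span 10 ms: the guard time, not the
# bandwidth, dominates, collapsing the rate from 5000 to under 100 symbols/s.
print(symbol_rate_with_guard(5000.0, 0.010))
```

This is why, as the next paragraph notes, practical receivers instead try to counteract the intersymbol interference rather than simply wait it out.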
To maximize the symbol rate, a receiver must be designed to counteract very long intersymbol interference.

Path Loss: The path loss that occurs in an acoustic channel over a distance d is given as A = d^k · a(f)^d, where k is the path loss exponent, whose value is usually between 1 and 2, and a(f) is the absorption factor that depends on the frequency f. This dependence severely limits the available bandwidth: for example, at distances on the order of 100 km, the available bandwidth is only on the order of 1 kHz. At shorter distances a larger bandwidth is available, but in practice it is limited by that of the transducer. Also, in contrast to radio systems, an acoustic signal is rarely narrowband, i.e., its bandwidth is not negligible with respect to the center frequency. Within this limited bandwidth, the signal is subject to multipath propagation, which is particularly pronounced on horizontal channels.

III. ROUTING PROTOCOLS
There are several drawbacks with respect to the suitability of existing terrestrial routing solutions for underwater wireless communications. Routing protocols can be divided into three categories, namely proactive, reactive, and geographical. Proactive protocols provoke a large signaling overhead to establish routes for the first time and each time the network topology is modified because of mobility, node failures, or channel state changes, since updated topology information must be propagated to all network devices. In this way, each device can establish a path to any other node in the network, which may not be required in underwater networks. Also, scalability is an important issue for this family of routing schemes. For these reasons, proactive protocols may not be suitable for underwater networks. Reactive protocols are more appropriate for dynamic environments but incur a higher latency and still require source-initiated flooding of control packets to establish paths.
Reactive protocols may be unsuitable for underwater networks because they also cause a high latency in the establishment of paths, which is amplified underwater by the slow propagation of acoustic signals. Geographical routing protocols are very promising for their scalability and limited signaling requirements. However, global positioning system (GPS) receivers do not work properly in the underwater environment. Still, underwater sensing devices must estimate their current position, irrespective of the chosen routing approach, to associate the sampled data with their 3D position.

IV. ARCHITECTURE
In general, depending on the permanent vs. on-demand placement of the sensors, the time constraints imposed by the applications, and the volume of data being retrieved, we can roughly classify aquatic application scenarios into two broad categories: long-term non-time-critical aquatic monitoring and short-term time-critical aquatic exploration.

Fig. 5 illustrates the mobile UWSN architecture for long-term non-time-critical aquatic monitoring applications. (Fig. 5: An illustration of the mobile UWSN architecture for long-term non-time-critical aquatic monitoring applications.) In this type of network, sensor nodes are densely deployed to cover a spatially continuous monitoring area. Data are collected by local sensors, relayed by intermediate sensors, and finally reach the surface nodes (equipped with both acoustic and RF (radio frequency) modems), which can transmit data to the on-shore command center by radio. Since this type of network is designed for long-term monitoring tasks, energy saving is a central issue to consider in the protocol design. Moreover, depending on the data sampling frequency, we may need mechanisms to dynamically control the mode of sensors (switching between sleeping, wake-up, and working modes). In this way, we may save more energy.
Further, when sensors are running out of battery, they should be able to pop up to the water surface for recharging, for which a simple air-bladder-like device would suffice. Clearly, in mobile UWSNs for long-term aquatic monitoring, localization is a must-do task for locating mobile sensors, since usually only location-aware data is useful in aquatic monitoring. In addition, sensor location information can be utilized to assist data forwarding, since geo-routing proves to be more efficient than pure flooding. Furthermore, location can help to determine whether sensors have floated across the boundary of the area of interest.

In Fig. 6, we show a civilian scenario of the mobile UWSN architecture for short-term time-critical aquatic exploration applications. (Fig. 6: An illustration of the mobile UWSN architecture for short-term time-critical aquatic exploration applications.) Assume a ship wreckage investigation team wants to identify the target venue. With a tethered remotely operated vehicle (ROV), if the cable is damaged, the ROV goes out of control or cannot be recovered. In contrast, by deploying a mobile underwater wireless sensor network, as shown in Fig. 2, the investigation team can control the ROV remotely. The self-reconfigurable underwater sensor network tolerates more faults than the existing tethered solution. After investigation, the underwater sensors can be recovered by issuing a command to trigger the air-bladder devices. As limited by acoustic physics and coding technology, high-data-rate networking can only be realized in the high-frequency acoustic band in underwater communication. Empirical implementations have demonstrated that the link bandwidth can reach up to 0.5 Mbps at a distance of 60 meters. Such a high data rate is suitable for delivering even multimedia data. Compared with the first type of mobile UWSN, for long-term non-time-critical aquatic monitoring, the mobile UWSN for short-term time-critical aquatic exploration presents the following differences in protocol design.
Real-time data transfer is more of a concern; energy saving becomes a secondary issue; and localization is not a must-do task. However, reliable, resilient, and secure data transfer is always a desired advanced feature for both types of mobile UWSNs. V ACOUSTIC MODEM TECHNOLOGY Acoustic modem technology offers two types of modulation/detection: frequency-shift keying (FSK) with non-coherent detection and phase-shift keying (PSK) with coherent detection. FSK has traditionally been used for robust acoustic communications at low bit rates (typically on the order of 100 bps). To achieve bandwidth efficiency, i.e., to transmit at a bit rate greater than the available bandwidth, the information must be encoded into the phase or the amplitude of the signal, as is done in PSK or quadrature amplitude modulation (QAM). The symbol stream modulates the carrier, and the so-obtained signal is transmitted over the channel. To detect this type of signal on a multipath-distorted acoustic channel, a receiver must employ an equalizer whose task is to unravel the intersymbol interference. A block diagram of an adaptive decision-feedback equalizer (DFE) is shown in Figure 7. Fig 7: Multichannel adaptive decision-feedback equalizer (DFE) used for high-speed underwater acoustic communications; it supports any linear modulation format, such as M-ary PSK or M-ary QAM. In this configuration, multiple input signals, obtained from spatially diverse receiving hydrophones, can be used to enhance the system performance. The receiver parameters are optimized to minimize the mean squared error in the detected data stream. After the initial training period, during which a known symbol sequence is transmitted, the equalizer is adjusted adaptively, using the output symbol decisions. An integrated Doppler tracking algorithm enables the equalizer to operate in a mobile scenario. This receiver structure has been used on various types of acoustic channels.
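As a rough illustration of the receiver principle described above, the following is a minimal LMS-trained decision-feedback equalizer for real BPSK symbols. It is a didactic sketch, not the multichannel Doppler-tracking receiver of Fig. 7: it uses a single input channel, real-valued symbols, and plain LMS adaptation; tap counts and step size are arbitrary illustrative choices.

```python
import numpy as np

def lms_dfe(received, training, n_ff=5, n_fb=3, mu=0.05):
    """Minimal decision-feedback equalizer for BPSK symbols (+1/-1).

    LMS-trained on a known preamble, then decision-directed: feedforward
    taps filter the received samples while feedback taps subtract
    interference estimated from already-decided symbols.
    """
    ff = np.zeros(n_ff)       # feedforward taps
    fb = np.zeros(n_fb)       # feedback taps
    past = np.zeros(n_fb)     # most recent decisions, newest first
    out = []
    for k in range(len(received)):
        x = np.asarray(received[max(0, k - n_ff + 1):k + 1])[::-1]
        x = np.pad(x, (0, n_ff - len(x)))      # newest sample first
        y = ff @ x - fb @ past                 # equalizer output
        if k < len(training):
            d = training[k]                    # training mode
        else:
            d = 1.0 if y >= 0 else -1.0        # decision-directed mode
        e = d - y                              # error signal
        ff += mu * e * x                       # LMS tap updates
        fb -= mu * e * past
        past = np.roll(past, 1)
        past[0] = d
        out.append(d)
    return np.array(out)
```

On a clean channel this converges within a short preamble; a practical underwater receiver would add complex-valued symbols, many more taps, carrier-phase and Doppler tracking, and multiple hydrophone inputs, as the text describes.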
Current achievements include transmission at bit rates on the order of one kbps over long ranges (10-100 nautical miles) and several tens of kbps over short ranges (a few km), the highest rates reported to date. VI MOBILE UWSNs AND GROUND-BASED SENSOR NETWORKS A mobile UWSN differs significantly from any ground-based sensor network in the following aspects: Communication Method: Electromagnetic waves cannot propagate over a long distance in underwater environments. Therefore, underwater sensor networks have to rely on other physical means, such as acoustic sound, to transmit signals. Unlike wireless links among ground-based sensors, each underwater wireless link features large latency and low bandwidth. Due to such distinct network dynamics, communication protocols used in ground-based sensor networks may not be suitable for underwater sensor networks. Specifically, low bandwidth and large latency usually result in long end-to-end delays, which pose big challenges for reliable data transfer and traffic congestion control. The large latency also significantly affects multiple access protocols: traditional random access approaches in RF wireless networks might not work efficiently in underwater scenarios. Node Mobility: Most sensor nodes in ground-based sensor networks are typically static, though it is possible to implement interactions between these static sensor nodes and a limited number of mobile nodes (e.g., mobile data-collecting entities like mules, which may or may not be sensor nodes). In contrast, the majority of underwater sensor nodes, except some fixed nodes mounted on surface-level buoys, have low or medium mobility due to water currents and other underwater activities. From empirical observations, underwater objects may move at a speed of 2-3 knots (or 3-6 kilometers per hour) in typical underwater conditions [2].
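Both differences above, large acoustic latency and node drift, can be quantified with back-of-the-envelope figures. The sketch below combines a nominal 1500 m/s sound speed with the 0.5 Mbps / 60 m link and the 3-6 km/h drift speeds cited in this section; the 500 m neighborhood radius is a hypothetical value chosen only for illustration.

```python
SOUND_SPEED_MS = 1500.0  # nominal speed of sound in seawater (m/s)

def one_way_delay_s(distance_m: float, payload_bits: int, rate_bps: float) -> float:
    """Transmission time plus acoustic propagation time for one hop."""
    return payload_bits / rate_bps + distance_m / SOUND_SPEED_MS

def drift_time_s(range_m: float, speed_kmh: float) -> float:
    """Time for a node drifting at `speed_kmh` to traverse `range_m`."""
    return range_m / (speed_kmh * 1000.0 / 3600.0)

# A 1 kB frame on the 0.5 Mbps, 60 m link: propagation (40 ms) dominates
# the ~16 ms transmission time -- the opposite of typical RF links, where
# propagation at light speed is negligible.
delay = one_way_delay_s(60, 8 * 1024, 0.5e6)

# At 3-6 km/h, a node crosses a hypothetical 500 m acoustic neighborhood
# in 5-10 minutes, so neighbor tables and routes go stale quickly.
slow, fast = drift_time_s(500, 3), drift_time_s(500, 6)
print(f"one-hop delay: {delay*1e3:.1f} ms; drift: {fast/60:.0f}-{slow/60:.0f} min")
```

Numbers like these explain why handshake-heavy MAC protocols and static-topology routing, both common on ground, degrade badly underwater.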
Therefore, if a network protocol proposed for ground-based sensor networks does not consider mobility for the majority of sensor nodes, it would likely fail when directly cloned for aquatic applications. Although there has been extensive research in ground-based sensor networks, due to the unique features of mobile UWSNs, new research at almost every level of the protocol suite is required. VII

Antigone and A Doll's House Feminine Comparison Essay

"The emotional, sexual, and psychological stereotyping of females begins when the doctor says, 'It's a girl'" (Chisholm). Where do women fit in the social order of society today? Many women today fit the same role that would have been expected of them long ago. Generally speaking, though, women have many more options today. The male hierarchy still governs most aspects of society, but with many more limitations, because women are discovering that they can stand on their own and have no need for constant regulation by their male counterparts. Patriarchal influences are the base of society. In Antigone, Sophocles tells a tale about Greek values and women's status. Antigone has just witnessed her two brothers kill each other; one brother died defending Thebes and the other died betraying it. Creon's law keeps anyone from burying the traitor, and Antigone is set on contravening it. Conversely, Ibsen's play A Doll's House is a story about an intelligent woman, Nora, who is misunderstood by her husband, Torvald. She takes desperate measures to keep her family intact but in the end winds up going out on her own. As the stories progress, both Nora's and Antigone's characters become very similar in that they are rebellious, are subservient to male jurisdiction, and are resolute and strong-willed in their decisions. Firstly, Antigone and Nora are both mildly chaotic and rebellious. Women at those times were expected to follow patriarchal jurisdiction. When Antigone goes against Creon's law she shows her rebellious side. Nora's seditiousness is demonstrated when Mrs. Linden converses with her and says, "Why, a wife can't borrow money without her husband's consent" (Ibsen 151). During Ibsen's era many... ...igone is expected to be subservient, but has an urge to defy. Ismene tries to remind her about women's place in society, but fails to persuade Antigone.
Nora all her life has been the little helpless lark that cannot think for herself. Torvald sees her as what a woman was expected to be, and she is powerless. The resoluteness and strong will of both Antigone and Nora are vital; without these qualities they would not have gotten far in their campaigns. Antigone maintains a constant but shaky decisiveness throughout the story, while Nora's decisiveness remains unclear until the very end, when she finally decides to move out and is unmoved by Torvald's pleas. These characters tell the story of women in the past who paved the way for women today. Without them, who knows how women would be treated today? Surely, their actions were greatly needed.

Tuesday, September 3, 2019

Edgar Allan Poe

Absence of Beauty Edgar Allan Poe sees evil as a living threat to man because man lives in its presence. This parallels the tragedies of his own life: the deaths of his young mother, his wife, and others he loved. It is no wonder that he sees the absence of beauty as evil, for he felt the terror and tragedy of such loss himself. In his stories he illustrates how the absence of beauty is the essence of evil. In "The Tell-Tale Heart," when the old man's eye is closed he will not be killed, because the closed eye is not considered ugly. That is why each night the narrator goes into his room to see if the eye is open. "...but I found the eye always closed; and so it was impossible to do the work; for it was not the old man who vexed me, but his Evil Eye" (139). The eye, when open, represents the ugliness of the old man. When that ugliness is present, beauty is gone and evil is present. The ugliness of the old man's open eye is what drives his killer to kill him, because evil is present and beauty is nowhere to be found. In "The Fall of the House of Usher," Madeline is beautiful; once she gets sick, her brother Roderick gets sick and everything seems to fall apart. Madeline's beauty had kept the evil down and covered up. As Madeline gets sicker and sicker, everything gets worse and worse. Finally, when Madeline dies and beauty no longer exists, Roderick goes crazy and everything is destroyed, because beauty was not there to cover up all the evil they possessed. The absence of beauty caused all evil to break loose; the house collapses and Roderick is destroyed. In "The Black Cat," the cat was beautiful and precious to him. "This latter was a remarkably large and beautiful animal, entirely black, and sagacious to an astonishing degree" (12). Beauty is what one person sees through his own eyes.
"The cat followed me down the steep stairs, and nearly throwing me headlong, exasperated me to madness" (18). Once he saw that the cat was no longer beautiful, he was driven to murder his wife: all his evil had been hidden, and once the beauty he saw died and became nonexistent, everything he was hiding, especially his evil side, came out and caused him to kill.

Monday, September 2, 2019

The Process of Successful Change

The Process of Successful Change Norma Taylor HCS 325 July 10, 2012 The Process of Successful Change There are many responsibilities involved with the title of manager. Implementing and rolling out change to your employees can be overwhelming. There are different techniques used to ensure a smooth, uneventful transition to change. Some techniques are not as useful and successful as others, depending on what type of change is involved. Using motivational techniques to implement change in a company is not an easy task, but it is possible. Expectancy theory, two-factor theory, goal-setting theory, and equity theory are a few different techniques that I would use in my company. The expectancy theory is a unique way to motivate employees during a time of change. Victor Vroom's expectancy theory suggests that "people will do what they can do when they want to do it" (Lombardi & Schermerhorn, 2007). This theory depends on three factors: expectancy, instrumentality, and valence. Expectancy is the belief that working hard will result in a desired level of task achievement. Instrumentality is a person's belief that successful performance will be rewarded and lead to other good outcomes. Valence is the value a person assigns to the possible rewards and other work-related outcomes. There are pros and cons to the expectancy theory. One pro is that this theory is commonly recognized for supporting an employee's decision-making process. A shortcoming is that it has numerous elements that may make it less successful. For example, this theory does not take the emotional state of the individual into consideration. The individual's personality, abilities, skills, and knowledge, as well as previous experiences, are factors that may affect the outcome of this model. The expectancy theory of motivation is a "perception"-based model: the manager needs to estimate the motivational force (the value) of a reward for an employee.
The theory can be difficult to implement in a group environment (Leadership-Central.com, 2012). As a leader using the expectancy theory, I would set realistic goals for the employees. In addition, I would ensure that they set realistic goals for themselves. Failure to set a realistic goal will result in low motivation, as the expectancy will yield a low result. Rewards are a form of motivation for everyone, and I would set realistic rewards. As a leader I need to understand what my employees value, and I would link the reward to the goal. The trick here is to operate within your constraints and not to exaggerate the reward in comparison to the effort employees will need to expend. A high reward for low effort will create an expectation effect and may work against you. I believe the expectancy theory technique would work well in a small office. Implementing change and rewarding committed employees who produce positive results will give effective outcomes. The two-factor theory, developed by Frederick Herzberg, is another motivational technique used in the workplace. This theory states that certain factors in the workplace cause job satisfaction, while a separate set of factors causes dissatisfaction. Used as a motivational technique, this theory can produce great outcomes in the workplace. Job satisfaction can be achieved in the simplest ways. Acknowledging great performance gives employees a sense of job satisfaction at their workplace, thus creating a positive outcome. According to Herzberg, job satisfaction can come from a sense of achievement, feelings of recognition, a sense of responsibility, opportunity for advancement, and feelings of personal growth (Lombardi & Schermerhorn, 2007). Job gratification can indicate a great degree of incentive or productivity among workers. J.
Stacy Adams developed the equity theory, which helps explain that wages and working conditions alone do not determine employee motivation. His theory indicates that perceived unfairness is a motivating state. When people believe they have been inequitably treated in comparison to others, they will try to eradicate the discomfort and reestablish a sense of fairness to the situation (Lombardi & Schermerhorn, 2007). As a leader, I find this type of motivation essential to a work environment. Adams predicts that employees will deal with unfairness by changing their work contributions and decreasing their labor. He also believes that employees will ask for incentives, or simply leave their position in the company, because of unfair or unjust treatment compared to fellow employees. Treating everyone equally and fairly is a practice required in any type of work environment. Using this tool as motivation to implement change would be necessary: it would aid in the impartiality of rewards for doing an excellent job during the change, as well as the reprimands needed for employees not embracing the change as necessary. In the 1960s, Edwin Locke put forward the goal-setting theory of motivation. This theory states that goal setting is essentially linked to task performance: specific and challenging goals, along with appropriate feedback, contribute to higher and better task performance. In simple words, goals indicate and give direction to an employee about what needs to be done and how much effort is required. This is one of my favorite theories because I believe it is the most effective theory to use when implementing a change in a work setting. There are numerous important features in this theory. For example, Edwin Locke states that the employee's willingness to work toward the attainment of a goal is a main source of job motivation.
Clear, difficult, and specific goals are greater motivating factors than easy, general, and vague goals. Specific and clear goals lead to greater output and better performance (Management Study Guide, 2012). Goals ought to be reasonable and challenging, to give employees a sense of gratification and accomplishment when attained. The more challenging the goal, the greater the reward, and the higher the employee's desire to achieve it. Feedback is a means of gaining recognition, making clarifications, and regulating goals. There are many theories for implementing change and motivating employees; some may work while others may not. Change in a workplace is a process. As a leader I would start with what would benefit the company. Once the notice of change has been communicated, setting up training would be the next step. Using the goal-setting theory, I would let the employees know clearly what is expected of them and continue to implement the change. To motivate the workers and make the change a little more pleasant, I would reward them once the goal is achieved. Change is not always bad, but it is definitely a challenging task because of the various needs and desires of each individual.

Sunday, September 1, 2019

Cultural Behavior Essay

Cultural behavior is behavior exhibited by humans (and, some would argue, by other species as well, though to a much lesser degree) that is extra-somatic or extra-genetic; in other words, learned. For a behavior to be considered cultural it must be shared extra-genetically; that is, it must be taught. Language is an important element in human culture. It is the primary abstract artifact by which culture is transmitted extra-genetically. Cultural programming is an integral part of the overall school programming. The school has several initiatives that provide for cultural experiences. Culture is a collective programming of the mind that distinguishes the members of one group or category of people from another. Cultural determinism is the position that the ideas, meanings, beliefs, and values people learn as members of society determine human nature. People are what they learn. The optimistic version of cultural determinism places no limits on the abilities of human beings to do or to be whatever they want. Some anthropologists suggest that there is no universal "right way" of being human. The "right way" is almost always "our way," and "our way" in one society almost never corresponds to "our way" in any other society. The proper attitude of an informed human being could only be one of tolerance. The optimistic version of this theory postulates that, human nature being infinitely malleable, human beings can choose the ways of life they prefer. The pessimistic version maintains that people are what they are conditioned to be; this is something over which they have no control. Human beings are passive creatures and do whatever their culture tells them to do. This explanation leads to behaviorism, which locates the causes of human behavior in a realm that is totally beyond human control. It does not imply normalcy for oneself, nor for one's society.
It does, however, call for judgment when dealing with groups or societies different from one's own. Cultural differences manifest themselves in different ways and at differing levels of depth. Symbols represent the most superficial manifestations of culture and values the deepest, with heroes and rituals in between. Symbols are words, gestures, pictures, or objects that carry a particular meaning recognized only by those who share a particular culture. New symbols easily develop, old ones disappear. Symbols from one particular group are regularly copied by others. This is why symbols represent the outermost layer of a culture. Heroes are persons, past or present, real or fictitious, who possess characteristics that are highly prized in a culture. They also serve as models for behavior. Rituals are collective activities, sometimes superfluous in reaching desired objectives, but considered socially essential. They are therefore carried out most of the time for their own sake (ways of greeting, paying respect to others, religious and social ceremonies, etc.). The core of a culture is formed by values. Values are broad tendencies to prefer certain states of affairs over others (good vs. evil, right vs. wrong, natural vs. unnatural). Many values remain unconscious to those who hold them. Therefore they often cannot be discussed, nor can they be directly observed by others. Values can only be inferred from the way people act under different circumstances. Symbols, heroes, and rituals are the tangible or visual aspects of the practices of a culture. The true cultural meaning of the practices is intangible; it is revealed only when the practices are interpreted by insiders. Sources of cultural programming are family, friends, peers, schools, media, acquaintances, places of work, places of entertainment... in that order of importance. Indeed, pretty much everyone we meet or interact with in any way.