
IJREAS Volume 1, Issue 4 (December 2011) ISSN: 2294-3905

International Journal of Research in Engineering & Applied Sciences, http://www.euroasiapub.org

HEALTH, SAFETY, ENVIRONMENT AND CLIMATE Dr. Bhuvnender Chaudhary*

Saurabh Tripathi**

Nidhi Singh***

ABSTRACT

Sustainable development in the energy sector means progress in society while keeping in mind the survival of all, by containing the negative effects on life, health and sustainability. All of this depends on our lifestyle, habits, basic needs and desires. Do we contain our needs and think about the wastage of resources, and about managing resources with efficient, upgraded modern technology? If we look at the aspiration of everybody to conquer Mars and the Moon and to amass wealth, do they think of living a simple, healthy and safe life in a clean environment, with a pure climate and atmosphere? On the other side, do we follow environmental laws, conventions and guidelines, or do all these remain in books and papers, going no further than discussions and seminars? If we are really sincere about sustainability, health, safety, environment and climate, then we have to go beyond this: contain our needs and desires, educate the masses, especially the poor and downtrodden, and contract the expansion and growth of the population. In practice, however, we think of material growth, prosperity, comfort and enjoyment, and quench our greed for amassing wealth rather than living a simple life. This is against the principle of sustainability, and it is why we see and experience earthquakes, tsunamis, floods and volcanic eruptions: we keep disturbing the balance of the earth. When the earth balances its credit with its debit, we think of sustainability, but we soon forget and unbalance it again; the earth repeats the same process, but for how long can this equation of balance and imbalance continue?

Keywords: Health, Safety, Security, Environment.

*Dean, Department of Management Studies, Phonics Group of Institutions, Roorkee, Uttarakhand

**Asst. Professor, Department of Management Studies, Phonics Group of Institutions, Roorkee, Uttarakhand

***Asst. Professor, Department of Management Studies, Dev Bhoomi Institute of Technology, Dehradun


INTRODUCTION

Development and environment work against one another. Development includes the enhancement of knowledge and the creation of physical assets and infrastructure through rapid industrialization, all of which has a direct bearing on the surrounding environment and climate, which in turn affects our lives. Both those who take the benefits of development and those unable to take them are affected. The climate is changing very fast: if we study the track record of the past hundred years, the average temperature has risen by 2 degrees Celsius, and many areas on earth are drought-prone or affected by floods as well as tsunamis. All this is the result of rapid industrialization and development, which emit tonnes of carbon and greenhouse gases.

Per-capita emissions are a country's total emissions divided by its population. Per-capita emissions in the industrialized countries are typically as much as ten times the average in developing countries. This is one reason industrialized countries accepted responsibility for leading climate change efforts in the Kyoto negotiations. In Kyoto, the countries that took on quantified commitments for the first period (2008-12) corresponded roughly to those with per-capita emissions in 1990 of two tonnes of carbon or higher. In 2005, the top 20 emitters comprised 80% of total GHG emissions (PBL, 2010; see also the notes in the following section on the top-ten emitters in 2005). Countries with a Kyoto target made up 20% of total GHG emissions.
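The per-capita definition above is simple division. As a minimal illustrative sketch (the function name and the sample figures are hypothetical, not drawn from the paper's data):

```python
def per_capita_emissions(total_emissions_tonnes: float, population: int) -> float:
    """Per-capita emissions: a country's total emissions divided by its population."""
    return total_emissions_tonnes / population

# Hypothetical country: 7.2 billion tonnes of GHG emitted by a population of 300 million.
print(per_capita_emissions(7_200_000_000, 300_000_000))  # 24.0 tonnes per capita
```

On this definition, the Kyoto threshold mentioned above is simply a cut-off on this ratio (roughly two tonnes of carbon per person in 1990).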

Another way of measuring GHG emissions is to measure the total emissions that have accumulated in the atmosphere over time (IEA, 2007). Over a long time period, cumulative emissions provide an indication of a country's total contribution to GHG concentrations in the atmosphere. Over the 1900-2005 period, the US was the world's largest cumulative emitter of energy-related CO2, accounting for 30% of total cumulative emissions (IEA, 2007). The second-largest emitter was the EU, at 23%; the third-largest was China, at 8%; fourth was Japan, at 4%; fifth was India, at 2%. The rest of the world accounted for 33% of global cumulative energy-related CO2 emissions.
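The cumulative shares quoted above can be cross-checked for completeness: the five named emitters plus the rest of the world should cover the entire 1900-2005 total. A minimal sketch using the figures from the paragraph above:

```python
# Cumulative 1900-2005 energy-related CO2 emission shares (IEA, 2007), in per cent.
cumulative_shares = {
    "United States": 30,
    "European Union": 23,
    "China": 8,
    "Japan": 4,
    "India": 2,
    "Rest of world": 33,
}
print(sum(cumulative_shares.values()))  # 100: the listed shares account for the global total
```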

TOP-TEN EMITTERS

What follows is a ranking of the world's top ten emitters of GHGs for 2005 (MNP, 2007). The first figure is the country's or region's emissions as a percentage of the global total; the second is its per-capita emissions, in tonnes of GHG per capita:


S.No  Country/Region    Share of global emissions   Per-capita emissions (tonnes)
1     China             17%                         5.8
2     United States     16%                         24.1
3     European Union    11%                         10.6
4     Indonesia         6%                          12.9
5     India             5%                          2.1
6     Russia            5%                          14.9
7     Brazil            4%                          10.0
8     Japan             3%                          10.6
9     Canada            2%                          2.1
10    Mexico            2%                          6.4

• These values are for the GHG emissions from fossil fuel use and cement production.

Calculations are for carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O) and

gases containing fluorine (the F-gases HFCs, PFCs and SF6).

• These estimates are subject to large uncertainties regarding CO2 emissions from

deforestation; and the per country emissions of other GHGs (e.g., methane). There are

also other large uncertainties which mean that small differences between countries are

not significant. CO2 emissions from the decay of remaining biomass after biomass

burning/deforestation are not included.

• Excluding underground fires.

• Including an estimate of 2000 million tonnes CO2 from peat fires and decomposition of

peat soils after draining. However, the uncertainty range is very large.

• Industrialised countries: official country data reported to UNFCCC
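The shares in the table can be totalled as a rough consistency check against the top-20 figure quoted in the introduction: the top ten alone should account for less than the 80% attributed to the top twenty. A minimal sketch using the table's percentage column:

```python
# 2005 emission shares of the top-ten emitters (MNP, 2007), in per cent of the global total.
top_ten_shares = [17, 16, 11, 6, 5, 5, 4, 3, 2, 2]
top_ten_total = sum(top_ten_shares)
print(top_ten_total)       # 71
print(top_ten_total < 80)  # True: consistent with the top twenty comprising 80%
```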

Apart from this, many hazardous chemicals are in use that aggressively affect the life of the common man. No doubt the member countries of the Kyoto Protocol agreements are taking pains to reduce the level of carbon emissions; indeed, carbon emission has been legalized and carbon trading made permissible, but the ground reality of reducing carbon has not materialized: the level of carbon emissions is increasing day by day under the stress of development and the rising population of the world. To date, many conventions and agreements have taken place throughout the world, as follows:


• Aarhus Convention on Access to Information, Public Participation in Decision-

making and Access to Justice in Environmental Matters, Aarhus, 1998

• Alpine Convention together with its nine protocols

• ASEAN Agreement on Transboundary Haze Pollution

• Convention for the Conservation of Antarctic Marine Living Resources (CCAMLR),

Canberra, 1980.

o Agreed Measures for the Conservation of Antarctic Fauna and Flora

o Convention for the Conservation of Antarctic Seals

o Convention for the Conservation of Antarctic Marine Living Resources

o Protocol on Environmental Protection to the Antarctic Treaty

• Anti-Ballistic Missile Treaty (ABM Treaty) (ABMT)

• Asia-Pacific Partnership on Clean Development and Climate

• Barcelona Convention for the Protection and Development of the Marine

Environment and Coastal Region of the Mediterranean Sea

• Basel Convention on the Control of Transboundary Movements of Hazardous Wastes

and their Disposal, Basel, 1989.

• Biological Weapons Convention (Convention on the Prohibition of the Development,

Production and Stockpiling of Bacteriological [Biological] and Toxin Weapons and

on their Destruction) (BWC)

• Bonn Agreement (environment)

• Carpathian Convention Framework Convention on the Protection and Sustainable

Development of the Carpathians

• Cartagena Protocol on Biosafety

• Chemical Weapons Convention

• China Australia Migratory Bird Agreement

• CITES Convention on the International Trade in Endangered Species of Wild Flora

and Fauna

• Climate Change Agreement

• Comprehensive Test Ban Treaty (CTBT)

• Convention for the Conservation of Antarctic Seals

• Convention for Co-operation in the Protection and Development of the Marine and

Coastal Environment of the West and Central African Region, Abidjan, 1981.

• Convention for the Protection and Development of the Marine Environment and

Coastal Region of the Mediterranean Sea Barcelona Convention, Barcelona, 1976.


• Convention for the Protection and Development of the Marine Environment of the

Wider Caribbean Region, Cartagena de Indias, 1983.

• Convention for the Protection of the Marine Environment and Coastal Area of the

South-east Pacific, Lima, 1981.

• Convention for the Protection of the Marine Environment of the North-east Atlantic

OSPAR Convention, Paris, 1992.

• Convention for the Protection of the Natural Resources and Environment of the South

Pacific Region, Nouméa, 1986.

• Convention of the Protection, Management and Development of the Marine and

Coastal Environment of the Eastern African Region, Nairobi, 1985.

• Convention on Access to Information, Public Participation in Decision-making and

Access to Justice in Environmental Matters Aarhus Convention, Aarhus, 1998

• Convention on Assistance in the Case of a Nuclear Accident or Radiological

Emergency (Assistance Convention), Vienna, 1986.

• Convention on Biological Diversity (CBD), Nairobi, 1992.

• Convention on Certain Conventional Weapons

• Convention on Civil Liability for Damage Caused during Carriage of Dangerous

Goods by Road, Rail, and Inland Navigation Vessels (CRTD), Geneva, 1989.

• Convention on Cluster Munitions

• Convention on Early Notification of a Nuclear Accident (Notification Convention),

Vienna, 1986.

• Convention on Fishing and Conservation of Living Resources of the High Seas

• Convention on Long-Range Transboundary Air Pollution

• Convention on Nuclear Safety, Vienna, 1994.

• Vienna Convention on Civil Liability for Nuclear Damage, Vienna, 1963.

• Convention on the Conservation of European Wildlife and Natural Habitats

• Convention on the Conservation of Migratory Species of Wild Animals, (CMS),

Bonn, 1979.

• Convention on the International Trade in Endangered Species of Wild Flora and

Fauna, (CITES), Washington DC, 1973.

• Convention on the Prevention of Marine Pollution by Dumping Wastes and Other

Matter


• Convention on the Prohibition of Military or Any Other Hostile Use of Environmental

Modification Techniques

• Convention on the Protection and Use of Transboundary Watercourses and

International Lakes (ECE Water Convention), Helsinki, 1992.

• Convention on the Transboundary Effects of Industrial Accidents, Helsinki, 1992.

• Convention on Wetlands of International Importance Especially As Waterfowl

Habitat

• Convention to Combat Desertification (CCD), Paris, 1994.

• Convention on the Protection of the Black Sea against Pollution, Bucharest, 1992.

• Convention on the Protection of the Marine Environment of the Baltic Sea Area 1992

Helsinki Convention, Helsinki, 1992.

• Conventions within the UNEP Regional Seas Programme

• Convention on the ban of the Import into Africa and the Control of Transboundary

Movements and Management of Hazardous Wastes within Africa, Bamako, 1991.

o EMEP Protocol

o Nitrogen Oxide Protocol

o Volatile Organic Compounds Protocol

o Sulphur Emissions Reduction Protocols 1985 and 1994

o Heavy Metals Protocol

o POP Air Pollution Protocol

o Multi-effect Protocol (Gothenburg protocol) [5]

• Directive on the legal protection of biotechnological inventions

• Energy Community (Energy Community South East Europe Treaty) (ECSEE)

• Espoo Convention on Environmental Impact Assessment in a Transboundary Context,

Espoo, 1991.

• European Agreement Concerning the International Carriage of Dangerous Goods by

Inland Waterways (ADN), Geneva, 2000.

• European Agreement concerning the International Carriage of Dangerous Goods by

Road (ADR), Geneva, 1957.

• FAO International Code of Conduct on the distribution and use of Pesticides, Rome,

1985.

• FAO International Undertaking on Plant Genetic Resources, Rome, 1983.

• Framework Convention on Climate Change (UNFCCC), New York, 1992.


• Geneva Protocol (Protocol for the Prohibition of the Use in War of Asphyxiating,

Poisonous or other Gases, and of Bacteriological Methods of Warfare)

• International Convention for the Prevention of Pollution from Ships

• International Convention for the Conservation of Atlantic Tunas (ICCAT), Rio de

Janeiro, 1966.

• International Convention for the Regulation of Whaling (ICRW), Washington, 1946.

• International Treaty on Plant Genetic Resources for Food and Agriculture

• International Tropical Timber Agreement, 1983 (expired)

• International Tropical Timber Agreement, (ITTA), Geneva, 1994.

• Kuwait Regional Convention for Co-operation on the Protection of the Marine

Environment from Pollution, Kuwait, 1978.

• Regional Convention for the Conservation of the Red Sea and the Gulf of Aden

Environment, Jeddah, 1982.

• Kyoto Protocol - greenhouse gas emission reductions

• Migratory Bird Treaty Act of 1918

• Montreal Protocol on Substances That Deplete the Ozone Layer, Montreal, 1987.

• North American Agreement on Environmental Cooperation

• Protocol on Environmental Protection to the Antarctic Treaty

• Putrajaya Declaration of Regional Cooperation for the Sustainable Development of

the Seas of East Asia, Malaysia, 2003.

• Ramsar Convention (Convention on Wetlands of International Importance, especially as Waterfowl Habitat), Ramsar, 1971.

• Rotterdam Convention on the Prior Informed Consent Procedure for Certain

Hazardous Chemicals and Pesticides in International Trade, Rotterdam, 1998.

• Stockholm Convention on Persistent Organic Pollutants, Stockholm, 2001.

• Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space, and Under

Water

• Comprehensive Test Ban Treaty 1996

• United Nations Convention on the Law of the Sea

• United Nations Convention to Combat Desertification

• United Nations Framework Convention on Climate Change

• Vienna Convention for the Protection of the Ozone Layer, Vienna, 1985, including

the Montreal Protocol on Substances that Deplete the Ozone Layer, Montreal 1987.


• Vienna Convention on Civil Liability for Nuclear Damage, Vienna, 1963.

• Waigani Convention to Ban the Importation into Forum Island Countries of

Hazardous and Radioactive Wastes and to Control the Transboundary Movement and

Management of Hazardous Wastes within the South Pacific Region, Waigani, 1995.

• Western Regional Climate Action Initiative

The fact is that these conventions and meets remain at the ceremonial level, and violations of the rules and continued emissions go on unstoppably. The developed countries agreed to pay billions of dollars and supply technology to other countries for climate-related studies and projects, as per the commitments of the UNFCCC (United Nations Framework Convention on Climate Change), and the industrialized countries were to contain their emissions of gases and carbon; but these countries have failed to comply up to the expected standard, and so negotiations over reductions began in convention after convention, the most recent being the Berlin G77 meet and thereafter the IPCC (Intergovernmental Panel on Climate Change) meet in Copenhagen.

Now the rules are flexible, compliance is monitored against commitments, and penalties for non-compliance are executed. This is a beginning, but it is not satisfactory; there are many things to do and miles to go ahead.

EFFECT OF CARBON EMISSION

Rising temperatures and climate change are seen in many parts of the world in the shape of global warming due to the greenhouse effect, extreme cold conditions, cloud bursts, reduction of forest cover, expansion of desertification and tsunamis.

ENVIRONMENT AND DEVELOPMENT

The physical environment is part of a big natural process system that consists of various subsystems (atmosphere, hydrosphere, lithosphere and biosphere) that are uniquely interactive in nature. These various subsystems are closely inter-linked through their own natural processes. According to Odum (1971) and Trunk et al. (1978), dynamic equilibrium can be achieved naturally, but the time frame of the whole process depends on the magnitude of the disturbance. Dynamic equilibrium in nature cannot be achieved if the change is too big to handle.


Figure 1: Components within a physical environment

ENVIRONMENTAL DEGRADATION

Development will always cause changes to the physical environment. Under natural conditions, changes can be absorbed by the physical environment through interactions of the various components to attain a dynamic equilibrium state. The physical environment is capable of absorbing impacts as long as they do not exceed its optimum level; if the optimum is exceeded, the physical equilibrium will start to deteriorate. Interaction between each of the physical subsystems is important to human beings, as they are part of the physical environment. Each and every component of the physical environment is capable of fulfilling various human needs.

Figure 2: Environmental degradation resulting from interaction between human use system

and natural process system


EFFECTS OF ENVIRONMENTAL DEGRADATION

One of the greatest challenges facing humanity is environmental degradation, including deforestation, desertification, pollution and climate change, an issue of increasing concern for the international community. Environmental degradation increases the vulnerability of the societies it affects and contributes to the scarcity of resources.

Climate change will lead to an increase in the intensity and frequency of weather extremes, such as heat waves, floods, droughts and tropical cyclones. The people hardest hit by climate change and environmental degradation are those living in the most vulnerable areas, including coastal communities, small island nations, Sub-Saharan Africa and Asian delta regions. It is the poorest of the poor, who lack the resources to prepare, adapt and rebuild, that are most affected.

Environmental degradation can lead to a scarcity of resources, such as water and farmable land. Extreme weather events, such as severe flooding, increase the spread of water-related diseases, such as malaria and diarrhoea.

The effects of the major environmental problems on both health and productivity are:

a. Water pollution and water scarcity: As per UN estimates, more than two million deaths and billions of illnesses a year are attributable to water pollution. Water scarcity compounds these health problems. Productivity is affected by the costs of providing safe water, by constraints on economic activity caused by water shortages, and by the adverse effects of water pollution and shortages on other environmental resources, such as declining fisheries and aquifer depletion leading to irreversible compaction.

b. Air pollution: As per UN estimates, urban air pollution is responsible for 300,000-700,000 deaths annually and creates chronic health problems for many more people. Restrictions on vehicles and industrial activity during critical periods affect productivity, as does the effect of acid rain on forests and water bodies.

c. Solid and hazardous wastes: Diseases are spread by uncollected garbage and blocked drains; the health risks from hazardous wastes are typically more localized, but often acute. Wastes affect productivity through the pollution of groundwater resources.

d. Soil degradation: Depleted soils increase the risks of malnutrition for farmers. Productivity losses on tropical soils are estimated to be in the range of 0.5-1.5 per cent of GNP, while secondary productivity losses are due to siltation of reservoirs, transportation channels and other hydrologic investments.


e. Deforestation: Death and disease can result from the localized flooding caused by

deforestation. Loss of sustainable logging potential and of erosion prevention, watershed

stability and carbon sequestration provided by forests are among the productivity impacts of

deforestation.

f. Loss of biodiversity: The extinction of plant and animal species will potentially affect the

development of new drugs; it will reduce ecosystem adaptability and lead to the loss of

genetic resources.

g. Atmospheric changes: Ozone depletion is responsible for perhaps 300,000 additional

cases of skin cancer a year and 1.7 million cases of cataracts. Global warming may lead to

an increase in the risk of climatic natural disasters. Productivity impacts may include sea-level-rise

damage to coastal investments, regional changes in agricultural productivity and disruption of

the marine food chain.

ENVIRONMENTAL MANAGEMENT

Management of the environment involves the application of acquired knowledge about the environment with the aim of reducing, conserving or preventing further degradation. Management of the environment has to take into consideration detailed measurements and observations about the environment through space and time, and the social institutions involved in managing the environment.

Figure 3 gives an example of a multi-disciplinary framework in environmental management. In the figure, environmental management is surrounded by problems from every component of the natural process system, as depicted by conservation of habitat and species diversity (biosphere), air pollution (atmosphere), water pollution (hydrosphere) and land pollution (lithosphere). These problems are part of the environment that requires management and can only be precisely identified through environmental science, which is important in order to have in-depth knowledge of the physical environment's components. However, environmental science alone is inadequate in a management system, since environmental management also requires knowledge of culture, socio-economy and their impacts. Furthermore, there must also be environmental ethics to control human actions, a concrete relationship between the federal and state authorities, and the support of non-governmental organizations, the private sector and the general public.


Figure 3: Environmental management framework

HOW TO ACHIEVE SUSTAINABLE DEVELOPMENT

1. Execution of environmental laws and conventions: Merely framing laws and organizing meets and conventions is not sufficient; one has to think beyond this framework.

2. Social development: Merely developing lofty towers and high structures is not sufficient unless the lower strata of society are provided alternative means of livelihood for sustenance.

3. Education and awareness: The present level of education and mass awareness is not sufficient to cope with the required sustainable development; without it, all efforts go in vain.

4. Contain needs and desires: Environmental pollution and its hazardous effects on society arise from the rising needs and desires of the common man, and the lust to amass more and more without taking into consideration the side effects on the quality of life in society, not only for those who desire more but also for those who contain their needs. The masses should therefore be educated to cut their needs, keeping in mind the good and bad effects of consumption.

5. Containing population: If we want to live in a pollution-free environment, then we have to control the growing population of those who are just a burden, not contributing towards the quality of life, and polluting the environment without any check. As the population grows, the consequential daily requirements will also grow, which has a direct impact on the load borne by the environment.

(Figure 3 diagram labels: federal-state relations; development master plan; mass media; private sector; policy & legislation; R&D; education & ethics; NGOs; general public; planning; environmental master plan)


6. Limit the greed for amassing wealth: The capacity of wealth in the earth is limited; if all the wealth is extracted in a day, nothing is left for the future. Every extraction from the earth disturbs the earth's equilibrium, which generates environmental disorders such as drought, tsunami and flood as a continuous phenomenon. That is why we must extract only as much as is essential for sustenance.

7. Simple living and sober lifestyle: The environment can be protected only if we live a simple and sober lifestyle, without artificial show-off in functions, ceremonies, festivals and marriages, as all these occasions are full of different types of pollution which are very common nowadays. There is no proper check and regulation over them, and the maximum violation is created by the elite class, who carelessly indulge in celebrations of special occasions.

8. Promotion of green and environment-friendly technology: No doubt we in India have laws for pollution control and to check the use of polluting technology, but even then we are quite used to using substandard, obsolete, outdated technology without caring about its side effects on the quality of life on this earth.

9. Reward the promoters of environment-friendly efforts: We must reward, and keep on rewarding, the good work carried out for the promotion of eco-friendly systems as an effort to upgrade the quality of lives on earth.

10. Funding of environment-friendly projects: All those proposals which have a positive effect on human life and the quality of life on earth need to be supported and financed as a gesture of goodwill and positivity, taking society towards survival and growth under sustainable development.

GO GREEN FOR A LIFE OF HEALTH AND SUSTAINABILITY

We've identified six major forces, what we call the six Cs, that are pushing clean tech into the mainstream and driving the rapid growth, expansion, and economic necessity of clean tech across the globe: climate, costs, capital, competition, China, and consumers.

Costs: Perhaps the most powerful force driving today's clean-tech growth is simple economics. As a medium- to long-term trend, clean-energy costs are falling while the costs of fossil-fuel energy, despite the drop in the price of oil in the second half of 2008, are going up.

The future of clean tech is going to be, in many ways, about scaling up manufacturing and

driving down costs. Recent advances in core technology and manufacturing processes have


significantly improved performance, reliability, scalability, and cost of clean energy sources,

primarily solar and wind.

By contrast, in conventional fossil-fuel power such as coal and natural gas (which together

provide approximately 60% of the world’s electricity), the generating technologies are

mature, stable, and already widely deployed—so their technology costs are relatively steady

and predictable. What determines the price of conventional power is the cost of fuel—and the

price of fossil fuels, while certainly experiencing directional gyrations as we’ve seen in the

past year, has nearly always moved in the same general direction over the long term: up.

With solar, wind, small-scale hydroelectric, geothermal, and even the nascent technology of ocean tide- and wave-generated electricity, the price-determining formula is just the opposite. There is no cost of "fuel": the sun, the breeze, the heat of the earth, the tides and waves arrive free of charge daily.

Climate: Alarm is growing about the climate-change consequences caused by our continued

dependence on carbon-intensive, greenhouse gas (GHG)–emitting energy and transportation

sources, and manufacturing processes. The United Nations’ Intergovernmental Panel on

Climate Change warned in 2007 that global GHG emissions must be in decline by 2015 to

avert disastrous “runaway” climate change. And with insurance giants such as Swiss Re and

Munich Re thinking twice about climate impact on the issuance of their policies (try getting

an insurance policy for an oil rig in the Gulf of Mexico), the climate issue is coming front and

centre for companies, governments, and individuals.

This is driving clean-tech investment and deployment and becoming an increasingly

important factor in assessing investment risk factors. Global companies from DuPont to Wal-

Mart are investing heavily to promote energy efficiency and clean tech in their operations to

reduce their GHG contributions. “As an investor, do you believe that we’re going to take

climate change seriously in terms of legislation?” asks Mark Trexler, president of Trexler

Climate + Energy Services, a firm in Portland, Oregon, that advises companies and utilities

on carbon-reduction strategies. “To completely ignore it, in terms of investment decisions,

would be a terrible thing.”

Consumers: Rising energy prices, polluted ecosystems, and growing awareness of climate

change and the geopolitical costs associated with fossil fuels are driving a shift in consumer

attitudes and consumer demand for clean-tech products and services. That’s forcing

companies that sell to consumers – from appliance makers to auto manufacturers to Wal-Mart

– to produce and sell cleaner, more efficient products and to market them aggressively.


Who is driving this demand and growth, which is also evidenced by the steady expansion of

the LOHAS (Life of Health and Sustainability) demographic sector? Both early adopters,

who installed the first solar PV system in their neighbourhood or purchased an early-model

Toyota Prius, and mainstream customers, who are installing high-efficiency water heaters,

buying higher-mileage cars, insulating their homes with recycled denim, and demanding

efficient Energy Star appliances and windows.

These 21st century consumer preferences don’t seem to be slowed by the dramatic drop in

gasoline prices that began in the fall of 2008. A Consumer Federation of America survey in

February 2009 found that 76 percent of U.S. adults were still concerned about high gas prices

and an equal number worried about American dependence on oil from the Middle East.

Capital: An unprecedented influx of capital is changing the clean-tech landscape, with

billions of dollars, euros, yen, and yuan pouring in from a myriad of public and private

sector sources. Since the 1970s, investments in clean technology have moved from primarily

government research and development (R&D) projects to major multinationals, well-heeled

venture capitalists, and savvy individual investors.

General Electric, the world’s largest diversified manufacturer, plans to invest up to $1.5

billion a year in clean-tech R&D by 2010 as part of its “Ecomagination” business strategy.

Spain-based energy giants Iberdrola and Acciona are both poised to spend billions of dollars

building out their clean-energy portfolios, primarily wind power, over the coming years.

Toyota reportedly spends some $8 billion annually in R&D, much of it for hybrid and fuel-

cell development. Sanyo, the fourth largest solar cell manufacturer in the world behind Sharp,

Q-Cells, and Kyocera, has said it will invest $350 million over 5 years to expand its solar

operations as well.

The trend is significant. In 2008, despite its fourth-quarter downturn, venture capital

investments in clean tech (in North America, Europe, China, and India) grew 38% to $8.4

billion, according to research firm The Cleantech Group in San Francisco.

China: Clean tech is being driven by the inexorable demands being placed on the earth not only by mature economies but also by China, India, Brazil, Russia, and other rapidly developing

nations. Their expanding energy needs are driving major growth in clean-energy,

transportation, building, and water-delivery technologies.

China is emblematic of the resource-constraint issues facing our planet; China will not be

able to sustain its growth if it doesn’t widely embrace clean technology. The Chinese

government is starting to understand this and in 2006 committed to investing more than $200

billion over 15 years to meet nationally mandated targets for clean energy. China is planning


to have 60 gigawatts of renewable energy (not including large hydroelectric) by 2010 and 120

GW by 2020.

Competition: This refers to competition among cities, regions, and nations to attract and

grow clean tech as a core industry for job creation and economic development. Thrust into

the national spotlight in the past year with the focus on “green jobs” as a major component of

U.S. economic recovery, clean tech as a development tool is gaining significant traction.

Whether promoting the retraining of laid-off steelworkers to build wind turbines or

employing inner-city job seekers to weatherize homes in their neighbourhoods, more

governments are seeking (and seeing) the benefits of clean tech-focused development efforts.

These powerful global forces—the six Cs—have put clean tech onto centre stage and

awakened a diverse range of stakeholders across the world. From Beijing to Berlin, from San

Francisco to Bangalore, the clean tech revolution is well under way. It will determine which

regions lead and prosper and which regions are left drowning in their own effluents, choking

on their own emissions, and struggling to compete in a world that is leaner, greener, and less

reliant on fossil fuels.

We believe the choice for investors, companies, governments, and individuals is simple,

especially as we seek a dramatic transition out of our current financial crisis. Be part of one

of the greatest business and economic shifts in recorded human history, or become extinct

like the dinosaurs whose fossils fuelled the last great industrial revolution.

CONCLUSION

If we want to sustain ourselves, then we have to protect the environment and go for sustainable development by containing our needs and desires and by putting a check on growing populations through educating the poor and downtrodden about the protection of the environment. Environment protection depends on a life of health and sustainability, which is why we have to go green rather than offending environmental rules and laws. Government agencies need to be strict and vigilant about violations of environmental rules. Youth can play a vital role in the protection and promotion of environmental consciousness among the different segments of society. Town planning and industrial development must be environment-friendly, bearing in mind the long-term impact on the life of the common man and the sustenance and survival of biodiversity. NGOs can play a key role in educating the masses about the negative impact of the development and growth of polluting industries and of a rising population. Environmental education needs to be promoted beyond the curriculum, with a practical and meaningful approach, for the survival of all. Resources must be managed within the means and scope of our needs rather than a false and illusive living style. Lastly, we should be more practical and honest about the realities of life: health, safety, environment and climate. If the climate is neat and clean, the environment will be pure and human-friendly, and we will be safe and healthy. Health today is the safety of tomorrow; precaution today is protection for the future. This we can achieve through sustainable development and the active, agile management of existing resources, keeping in mind the needs of coming generations. The capacity and strength of planet earth are limited; if we manage our needs in the light of existing circumstances, then we can certainly look forward to a better future. In simple terms, if our planet earth is not safe, then we are all on a dangerous path to destruction, and if nature and the earth become violent, then neither offender nor conservator will survive: there will be no question and no answer, no officer and no subordinate, no student and no teacher, no king and no slave, no raja and no wazir, no agitation and no pollution, no poor and no rich. All will be equal and flat, without any structure, as when fire breaks out in a forest it burns more live trees than dead wood.



MAXIMUM POWER POINT TRACKING USING PERTURBATION

AND OBSERVATION AS WELL AS INCREMENTAL

CONDUCTANCE ALGORITHM

Manoj Kumar*

Dr. F. Ansari**

Dr. A. K. Jha***

ABSTRACT

This paper is a comparative study of two types of maximum power point tracking (MPPT). The

optimisation of energy generation in a photovoltaic (PV) system is necessary to let the PV

cells operate at the maximum power point (MPP) corresponding to the maximum efficiency.

Since the MPP varies, based on the irradiation and cell temperature, appropriate algorithms

must be utilised to track the MPP. This is known as maximum power point tracking (MPPT).

Different MPPT algorithms, each with its own specific performance, have been proposed in

the literature. The so-called perturb and observe (P&O) and incremental conductance methods are considered here and compared. These two methods are widely diffused because of their low cost and ease of implementation. When atmospheric conditions are

constant or change slowly, the P&O method oscillates close to MPP. However, when these

change rapidly, this method fails to track MPP and gives rise to a waste of part of the

available energy. A comparative study has been done on both the methods by using MATLAB

environment. The MPPT algorithm was set up and validated by means of MATLAB

simulations and experimental tests, confirming the effectiveness of the method.

Keywords: MPPT, MATLAB, Incremental Conductance, Perturb and Observe

*Gateway Inst. of Engg. & Tech

**BIT, Sindri

***Anupam Group of Industries


INTRODUCTION

A comparison between a PV water pumping system equipped with a maximum power point tracker (MPPT) and a direct-coupled system without MPPT has been carried out, with the addition of microcontroller-based solar tracking. The microcontroller rotates the panel so that maximum renewable energy can be utilized in a more efficient way.

Also, the design and simulation of the MPPT has been done using MATLAB to perform comparative tests of the perturb and observe (P&O) and incremental conductance (incCond) algorithms. Simulations also verify the functionality of the MPPT with a resistive load and then with a DC pump motor load.

The two MPPT algorithms, P&O and incCond, are discussed, implemented in MATLAB simulations, and tested for their performance. Since the purpose is to compare the two algorithms, each simulation contains only the PV model and the algorithm in order to isolate any influence from a converter or load. First, they are verified to locate the MPP correctly under constant irradiance, as shown in Figure 1.

Figure 1: Searching the MPP (1 kW/m2, 25°C). The traces of the PV operating point are shown in green, and the MPP is the red asterisk.

MAXIMUM POWER POINT TRACKER

When a PV module is directly coupled to a load, the PV module's operating point will be at the intersection of its I-V curve and the load line, which is the I-V relationship of the load. For example, as in Figure 2, a resistive load has a straight load line with a slope of 1/Rload, as shown in Figure 3. In other words, the impedance of the load dictates the operating condition of the PV module. In general, this operating point is seldom at the PV module's MPP, thus it is not


producing the maximum power. A study shows that a direct-coupled system utilizes a mere

31% of the PV capacity [1]. A PV array is usually oversized to compensate for a low power

yield during winter months. This mismatching between a PV module and a load requires

further over-sizing of the PV array and thus increases the overall system cost. To mitigate this

problem, a maximum power point tracker (MPPT) can be used to maintain the PV module’s

operating point at the MPP. MPPTs can extract more than 97% of the PV power when

properly optimized [2]. This section discusses the I-V characteristics of PV modules and

loads, matching between the two, and the use of DC-DC converters as a means of MPPT. It

also discusses the details of some MPPT algorithms and control methods, and limitations of

MPPT.

Figure 2: PV module is directly connected to a (variable) resistive load

Figure 3: I-V curves of the BP SX 150S PV module and various resistive loads, simulated with the MATLAB model (1 kW/m2, 25°C)

MAXIMUM POWER POINT TRACKING ALGORITHMS

The location of the MPP in the I-V plane is not known beforehand and always changes dynamically depending on irradiance and temperature. For example, Figure 4 shows a set of PV I-V curves under increasing irradiance at a constant temperature (25°C), and Figure 5 shows the I-V curves at the same irradiance values but at a higher temperature (50°C). There are observable voltage shifts where the MPP occurs. Therefore, the MPP needs to be located by a tracking algorithm, which is the heart of an MPPT controller. There are a number of


methods that have been proposed. One method measures the open-circuit voltage (Voc) of the PV module every 30 seconds by disconnecting it from the rest of the circuit for a short moment. Then, after re-connection, the module voltage is adjusted to 76% of the measured Voc, which corresponds to the voltage at the MPP [3]. The implementation of this open-loop control method is very simple and low-cost, although the MPPT efficiencies are relatively low (between 73~91%) [3]. Model calculations can also predict the location of the MPP; however, in practice this does not work well because it does not take physical variations, aging of the module, and other effects such as shading into account. Furthermore, a pyranometer that measures irradiance is quite expensive. Search algorithms using closed-loop control can achieve higher efficiencies; thus they are the customary choice for MPPT. Among the different algorithms, the Perturb & Observe (P&O) and Incremental Conductance (incCond) methods are studied.
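As a rough illustration of the open-loop fractional-Voc method described above, the sketch below uses Python as a stand-in for the paper's MATLAB environment; the measure_voc callback is a hypothetical interface, and the 43.5 V open-circuit voltage is a made-up value, while the 0.76 factor comes from the text:

```python
def fractional_voc_reference(measure_voc, k=0.76):
    """Open-loop MPPT: disconnect the module briefly, measure its
    open-circuit voltage Voc, and regulate the module at k * Voc
    after re-connection (k = 76% per the text, repeated every 30 s)."""
    voc = measure_voc()   # module is disconnected for this reading
    return k * voc        # voltage reference used after re-connection

# Hypothetical module whose open-circuit voltage is 43.5 V
v_ref = fractional_voc_reference(lambda: 43.5)
```

The simplicity of this rule is exactly why its efficiency is limited: the true MPP fraction of Voc drifts with irradiance and temperature, but k stays fixed.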

Figure 4: I-V curves for varying irradiance and a trace of MPPs (25°C)

Figure 5: I-V curves for varying irradiance and a trace of MPPs (50°C)

PERTURB & OBSERVE ALGORITHM

The perturb & observe (P&O) algorithm, also known as the "hill climbing" method, is very popular and the most commonly used in practice because of its algorithmic simplicity and ease of implementation. The most basic form of the P&O algorithm operates as follows. Figure 6 shows a PV module's output power curve as a function of voltage (P-V curve), at constant irradiance and constant module temperature, assuming the PV module is


operating at a point which is away from the MPP. In this algorithm the operating voltage of

the PV module is perturbed by a small increment, and the resulting change of power, ∆P, is

observed. If ∆P is positive, the perturbation is assumed to have moved the operating point

closer to the MPP.

Thus, further voltage perturbations in the same direction should move the operating point

toward the MPP. If ∆P is negative, the operating point has moved away from the MPP,

and the direction of perturbation should be reversed to move back toward the MPP.
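The hill-climbing loop just described can be sketched as follows, with Python standing in for the paper's MATLAB environment; the parabolic P-V curve and the 0.5 V step size are illustrative assumptions, not the BP SX 150S model:

```python
def perturb_and_observe(pv_power, v0, step=0.5, iterations=50):
    """Basic P&O loop: perturb the operating voltage, observe the power
    change dP, and keep the direction while dP is positive; reverse it
    when dP turns negative."""
    v = v0
    direction = 1                  # initial perturbation direction
    p_prev = pv_power(v)
    for _ in range(iterations):
        v += direction * step      # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:             # power fell: reverse the direction
            direction = -direction
        p_prev = p
    return v

# Illustrative P-V curve with its maximum at 35 V (not real module data)
pv = lambda v: max(0.0, -0.1 * (v - 35.0) ** 2 + 150.0)
v_mpp = perturb_and_observe(pv, v0=20.0)
```

Note that once the MPP is reached the operating point keeps oscillating one step either side of it, which is the steady-state ripple the text mentions for constant or slowly changing conditions.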

Figure 6: Plot of power vs. voltage for the BP SX 150S PV module (1 kW/m2, 25°C)

INCREMENTAL CONDUCTANCE ALGORITHM

To solve the problem of the P&O algorithm under rapidly changing atmospheric conditions, the incremental conductance (incCond) algorithm was proposed [1]. The basic idea is that the slope of the P-V curve becomes zero at the MPP, as shown in Figure 6. It is also possible to find the relative location of the operating point with respect to the MPP by looking at the slope. The slope is the derivative of the PV module's power with respect to its voltage and has the following relationships with the MPP.

dP/dV = 0 at the MPP (1)

dP/dV > 0 at the left of the MPP (2)

dP/dV < 0 at the right of the MPP (3)

The above equations are written in terms of voltage and current as follows:

dP/dV = d(IV)/dV = I + V·(dI/dV) ≅ I + V·(∆I/∆V) (4)

If the operating point is at the MPP, equation (4) becomes:

I + V·(∆I/∆V) = 0 (5)

∆I/∆V = −I/V (6)


If the operating point is at the left side of the MPP, equation (4) becomes:

I + V·(∆I/∆V) > 0 (7)

∆I/∆V > −I/V (8)

If the operating point is at the right side of the MPP, equation (4) becomes:

I + V·(∆I/∆V) < 0 (9)

∆I/∆V < −I/V (10)

Note that the left side of the equations (6), (8), and (10) represents incremental conductance

of the PV module, and the right side of the equations represents its instantaneous

conductance.
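The decision rule built on equations (6), (8), and (10) can be sketched compactly, again with Python standing in for the MATLAB implementation; the toy I-V curve is an illustrative assumption, not a real module model:

```python
def inc_cond_track(i_of_v, v0, step=0.25, iterations=200):
    """Incremental conductance MPPT: compare the incremental conductance
    dI/dV against the instantaneous conductance -I/V to decide which way
    to move the operating voltage (equations (6), (8), and (10))."""
    v_prev, i_prev = v0, i_of_v(v0)
    v = v0 + step                             # initial perturbation
    for _ in range(iterations):
        i = i_of_v(v)
        dv, di = v - v_prev, i - i_prev
        v_prev, i_prev = v, i
        if dv != 0 and di / dv > -i / v:      # left of the MPP: raise V
            v += step
        elif dv != 0 and di / dv < -i / v:    # right of the MPP: lower V
            v -= step
        # otherwise dP/dV = 0: hold the voltage, we are at the MPP
    return v

# Toy I-V curve (not real module data); its power P = 5V - 0.01V^3
# peaks at V = sqrt(500/3), roughly 12.9 V
v_mpp = inc_cond_track(lambda v: 5.0 - 0.01 * v * v, v0=5.0)
```

Unlike P&O, the sign test here uses the measured conductances directly, which is what lets the method distinguish a true hill-climb from a power change caused by shifting irradiance.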

PI CONTROLLING OF MPPT

As shown in Figure 7, the MPPT takes measurements of the PV voltage and current, and then runs the tracking algorithm (P&O, incCond, or variations of the two). The PI loop operates at a much faster rate and provides fast response and overall system stability [4] [5]. The PI controller itself can be implemented with analog components, but it is often done with a DSP-based controller [4] because the DSP can handle other tasks such as MPP tracking, thus reducing the parts count.

Figure 7: Block diagram of MPPT with the PI compensator

COMPARISONS OF P&O AND INCCOND ALGORITHMS

The two MPPT algorithms, P&O and incCond, discussed above are implemented in MATLAB simulations and tested for their performance. Since the purpose is to compare the two algorithms, each simulation contains only the PV model and the algorithm in order to isolate any influence from a converter or load. First, they are verified to locate the MPP correctly under constant irradiance, as shown in Figure 8.


Figure 8: Searching the MPP (1 kW/m2, 25°C). The traces of the PV operating point are shown in green, and the MPP is the red asterisk.

Next, the algorithms are tested with actual irradiance data provided by [6]. Simulations use two sets of data, shown in Figure 9: the first set is the measurements of a sunny day in April in Barcelona, Spain, and the second is for a cloudy day in the same month at the same location. The data contain irradiance measurements taken every two minutes for 12 hours. Irradiance values between two data points are estimated by cubic interpolation using MATLAB functions.
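The interpolation step can be sketched in Python with a simple piecewise-cubic (Catmull-Rom) scheme standing in for MATLAB's cubic interpolation; the sample values below are illustrative, not the Barcelona data set:

```python
import bisect

def catmull_rom(xs, ys, x):
    """Piecewise-cubic (Catmull-Rom) interpolation of sampled data,
    a simple stand-in for MATLAB's cubic interpolation; endpoints
    reuse their neighbour as the missing outer control point."""
    k = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    t = (x - xs[k]) / (xs[k + 1] - xs[k])
    p0 = ys[k - 1] if k > 0 else ys[k]
    p1, p2 = ys[k], ys[k + 1]
    p3 = ys[k + 2] if k + 2 < len(ys) else ys[k + 1]
    # Catmull-Rom basis polynomial in t
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

# Two-minute irradiance samples in W/m2 (illustrative values only)
xs = [0, 2, 4, 6]
ys = [300.0, 420.0, 500.0, 480.0]
g = catmull_rom(xs, ys, 3.0)   # estimated irradiance at minute 3
```

The curve passes through every sample, so the simulation sees a smooth irradiance trace that agrees with the measurements at the two-minute marks.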

Figure 9: Irradiance data for a sunny and a cloudy day of April in Barcelona, Spain [6]

On a sunny day, the irradiance level changes gradually since there is no influence of clouds. MPP tracking is expected to be easy. As shown in Figures 10 and 11, both algorithms locate

and maintain the PV operating point very close to the MPPs (shown in red asterisks) without

much difference in their performance.


Figure 10: Traces of MPP tracking on a sunny day (25°C) using the P&O algorithm.

Figure 11: Traces of MPP tracking on a sunny day (25°C) using the incCond algorithm.

On a cloudy day, the irradiance level changes rapidly because of passing clouds. MPP tracking is expected to be challenging. Figure 12 shows the trace of the PV operating points for the P&O algorithm, and Figure 13 for the incCond algorithm. For both algorithms, the deviations of the operating points from the MPPs are obvious when compared to the results of a sunny day. Between the two algorithms, the incCond algorithm is supposed to outperform the P&O algorithm under rapidly changing atmospheric conditions [1]. A close inspection of Figures 12 and 13 reveals that the P&O algorithm has slightly larger deviations overall and some erratic behaviour (such as the large deviation indicated by the red arrow). Some erratic traces are, however, also observable in the plot of the incCond algorithm.

Figure 12: Traces of MPP tracking on a cloudy day (25°C) using the P&O algorithm.

Figure 13: Traces of MPP tracking on a cloudy day (25°C) using the incCond algorithm.


Table 1: Comparison of the P&O and incCond algorithms on a cloudy day.

                                  P&O Algorithm    incCond Algorithm
Total energy (simulation)         479.63 Wh        479.69 Wh
Total energy (theoretical max)    480.38 Wh        480.38 Wh
Efficiency                        99.85%           99.86%

The total electric energy produced with the incCond algorithm is slightly larger than that of the P&O algorithm. The MPP tracking efficiency, measured as {Total Energy (simulation)} ÷ {Total Energy (theoretical max)} × 100%, is still good in the cloudy condition for both algorithms, and again it is marginally higher with the incCond algorithm. The irradiance data are only available at two-minute intervals, thus they do not record much faster rates of change within these intervals. The data may not represent a truly rapidly changing condition, and that could be a reason why the two results are so close. Also, further optimization of the algorithms and variation of the testing method may provide different results. The performance difference between the two algorithms, however, would not be large. There is a study showing similar results [3]: its simulation results showed an efficiency of 99.3% for the P&O algorithm and 99.4% for the incCond algorithm, and the experimental results showed 96.5% and 97.0%, respectively, for a partly cloudy day.
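The tracking-efficiency figure follows directly from the two energy totals; a quick check of the formula (energy values reproduced from Table 1):

```python
def mppt_tracking_efficiency(simulated_wh, theoretical_max_wh):
    """MPP tracking efficiency = Total Energy (simulation) divided by
    Total Energy (theoretical max), expressed as a percentage."""
    return simulated_wh / theoretical_max_wh * 100.0

eff_po = mppt_tracking_efficiency(479.63, 480.38)   # P&O, cloudy day
eff_ic = mppt_tracking_efficiency(479.69, 480.38)   # incCond, cloudy day
# Both come out above 99.8%, with incCond marginally higher
```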

MPPT SIMULATIONS WITH RESISTIVE LOAD

First, MPPT with a resistive load is implemented in MATLAB simulation and verified. The selection of the P&O algorithm permits the use of the output-sensing direct control method, which eliminates the input voltage and current sensors. The MPPT design therefore chooses the P&O algorithm and the output-sensing direct control method because of the advantage of allowing a simple and low-cost system. The simulated system consists of the BP SX 150S PV model, the ideal Cúk converter, the MPPT control, and the resistive load (6 Ω). The MATLAB function that models the PV module is the following:

Ia = bp_sx150s(Va, G, T) (11)

The function, bp_sx150s, calculates the module current (Ia) for the given module voltage (Va), irradiance (G in kW/m2), and module temperature (T in °C). The operating point of the PV module is located by its relationship to the load resistance (R), as explained earlier.


Ia = Va / R (12)

The irradiance (G) and the module temperature (T) in function (11) are known variables, thus Ia is a function of Va, i.e. Ia = f(Va). Substituting this into equation (12) gives:

f(Va) = Va / R (13)

Knowing the value of R enables solving this equation for the operating voltage (Va); MATLAB's fzero function is used to do so (see the Appendix for details). Substituting Va back into equation (11) gives the operating current (Ia).
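That root-finding step can be sketched in Python with a plain bisection standing in for MATLAB's fzero, and a made-up module curve standing in for bp_sx150s (the real BP SX 150S model is not reproduced here):

```python
def solve_operating_point(i_of_v, r_load, v_lo=0.0, v_hi=50.0, tol=1e-9):
    """Solve f(Va) = i_of_v(Va) - Va/R = 0 (equation (13)) by bisection.
    Assumes f(v_lo) > 0 (short-circuit side) and f(v_hi) < 0."""
    f = lambda v: i_of_v(v) - v / r_load
    lo, hi = v_lo, v_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid           # still on the current-rich side
        else:
            hi = mid           # past the crossing
    return 0.5 * (lo + hi)

# Toy module curve: ~5 A short-circuit current, ~43.5 V open-circuit voltage
i_of_v = lambda v: max(0.0, 5.0 * (1.0 - (v / 43.5) ** 10))
va = solve_operating_point(i_of_v, r_load=6.0)   # operating voltage, eq. (13)
ia = i_of_v(va)                                  # operating current, eq. (11)
```

The returned pair (Va, Ia) is exactly the intersection of the module's I-V curve with the 1/R load line discussed in the Maximum Power Point Tracker section.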

For the direct control method, each sampling of voltage and current is done at a periodic steady-state condition of the converter. The following equations describe the input/output relationship of voltage and current, and they are used in the MATLAB simulation:

Vo = D/(1 − D) · Vin (14)

Io = (1 − D)/D · Iin (15)

where D is the duty cycle of the Cúk converter.
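A sketch of this steady-state input/output mapping, assuming the standard ideal Cúk conversion ratio Vo/Vin = D/(1 − D) in magnitude (the Cúk's output polarity inversion is ignored, and the input values below are illustrative, not simulation data):

```python
def cuk_steady_state(v_in, i_in, duty):
    """Ideal Cuk converter steady-state magnitudes: Vo = D/(1-D) * Vin
    and Io = (1-D)/D * Iin, so input and output power are equal."""
    m = duty / (1.0 - duty)      # conversion ratio M(D)
    return v_in * m, i_in / m

v_o, i_o = cuk_steady_state(35.0, 4.3, duty=0.6)
# The ideal converter conserves power: v_o * i_o matches 35.0 * 4.3
```

Because the ideal converter is lossless, sampling the output voltage and current (the output-sensing direct control method above) carries the same power information as sensing the input side, which is what lets the design drop the input sensors.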

The simulation is performed under linearly increasing irradiance varying from 100 W/m2 to 1000 W/m2 at a moderate rate of 0.3 W/m2 per sample. Figures 14 and 15 show that the trace of the operating point stays close to the MPPs during the simulation. Figure 16 shows the relationship between the output power of the converter and its duty cycle. Figure 17 shows the current and voltage relationship of the converter output. Since the load is resistive, the current and voltage increase linearly with a slope of 1/Rload on the I-V plane.

Figure 14: Operating point, output power vs. voltage


Figure 15: Operating point, module current vs. module voltage

Figure 16: Operating point, output power vs. duty cycle

Figure 17: Operating point, output current vs. output voltage

RESULT ANALYSIS

The comparative study of the P&O algorithm and the incCond algorithm has been carried out by MATLAB simulation. For both algorithms, the deviations of the operating points from the MPPs are obvious when compared to the results of a sunny day. Between the two algorithms, the incCond algorithm is supposed to outperform the P&O algorithm under rapidly changing atmospheric conditions.

A theoretical study of the solar tracking system has also been carried out; it can provide more benefits compared to a simple photovoltaic system. We can utilize the maximum renewable energy


source by using a solar tracking system; although we can get more solar energy by connecting more PV systems in series or parallel, that would be more complex as well as costly.

A close inspection reveals that the P&O algorithm has slightly larger deviations overall and some erratic behaviour (such as the large deviation indicated by the red arrow). Some erratic behaviour is, however, also observable in the plot of the incCond algorithm. The total electric energy produced with the incCond algorithm is slightly larger than that of the P&O algorithm. The MPP tracking efficiency, measured as {Total Energy (simulation)} ÷ {Total Energy (theoretical max)} × 100%, is still good in the cloudy condition. The simulation results showed an efficiency of 99.3% for the P&O algorithm and 99.4% for the incCond algorithm. The experimental results showed 96.5% and 97.0%, respectively, for a partly cloudy day.

REFERENCES

1. K. H. Hussein et al., "Maximum Photovoltaic Power Tracking: an Algorithm for Rapidly Changing Atmospheric Conditions", IEE Proceedings - Generation, Transmission and Distribution, Vol. 142, pp. 59-64, January 1995.

2. D. P. Hohm and M. E. Ropp, "Comparative Study of Maximum Power Point Tracking Algorithms", Progress in Photovoltaics: Research and Applications, pp. 47-62, November 2002.

3. J. H. R. Enslin et al., "Integrated Photovoltaic Maximum Power Point Tracking Converter", IEEE Transactions on Industrial Electronics, Vol. 44, pp. 769-773, December 1997.

4. Hua Chihchiang et al., "Implementation of a DSP Controlled Photovoltaic System with Peak Power Tracking", IEEE Transactions on Industrial Electronics, Vol. 45, No. 1, pp. 99-107, February 1998.

5. E. Koutroulis et al., "Development of a Microcontroller-Based, Photovoltaic Maximum Power Point Tracking Control System", IEEE Transactions on Power Electronics, Vol. 16, No. 1, pp. 46-54, January 2001.

6. Luis Castañer and Santiago Silvestre, Modelling Photovoltaic Systems Using PSpice, John Wiley & Sons Ltd, 2002.

7. Abdelmalek Mokeddem et al., "Test and Analysis of a Photovoltaic DC-Motor Pumping System", in Proc. ICTON Mediterranean Winter Conference, pp. 1-7, 2007.

8. Taufik, Akihiro Oi et al., "Modeling and Simulation of Photovoltaic Water Pumping System", IEEE Third Asia International Conference on Modelling & Simulation, pp. 497-502, May 2009.

9. Anna Mathew et al., "MPPT Based Stand-Alone Water Pumping System", International Conference on Computer, Communication & Electrical Technology (ICCCET 2011), pp. 455-460, March 2011.

10. N. Hamrouni et al., "Measurements and Simulation of a PV Pumping System's Parameters Using MPPT and PWM Control Strategies", in Proc. IEEE Mediterranean Electrotechnical Conference, pp. 885-858, May 2006.


STUDY OF VARIOUS INDOOR PROPAGATION MODELS
Er. Neha Sharma*

Dr. G.C.Lall*

ABSTRACT

Indoor propagation modeling is required for the planning and maintenance of indoor wireless services. Propagation models provide estimates of signal strength and time dispersion in many indoor environments. These data are valuable in the design and installation of indoor radio systems. We propose improving existing channel models by a building-partitioning technique. Based on the measurement results, easy-to-use empirical propagation prediction models were derived for both buildings with satisfactory accuracy. The results are used to determine the path loss exponent and standard deviation. The RSS-versus-distance values likewise help in determining the variation between the multi-wall and single-wall models.

Keywords: Wireless LAN, Ekahau Heat mapper, Visi-site survey, propagation modeling,

GPS.

*HCTM, Kaithal, Haryana


1. INTRODUCTION
Researchers have developed a variety of experimentally or theoretically based models to predict radio propagation in various frequency bands and for various types of environments. The past decades have witnessed phenomenal growth in wireless communication. The need for wireless technology in offices and other working places has driven the development of indoor propagation models. Indoor propagation is not influenced by weather conditions such as rain, snow or clouds, as outdoor propagation is, but it can be affected by the layout of the building, especially the use of different building materials. Owing to reflection, refraction and diffraction of radio waves by objects such as walls, windows, doors and furniture inside the building, the transmitted signal often reaches the receiver through more than one path, resulting in a phenomenon known as multi-path fading [1][2].

The mechanisms behind electromagnetic wave propagation are diverse, but can generally be attributed to reflection, scattering, refraction and diffraction. A signal radiated from an antenna travels along one of three routes: ground wave, sky wave, or line of sight (LOS). Depending on the operating frequency range, one of the three predominates. In [2], a review of popular propagation models for the wireless communication channel is undertaken. Macrocell (typically a large outdoor area), microcell (a small outdoor area), and indoor environments are considered.

For a small network in a limited area, the manufacturer's information on coverage range is sufficient to deploy the APs. This paper is based on a site survey with many measurements and experimental decisions. One common approach employs surveying of signal-strength information in a particular area.

2. MODEL LOCATION
This research began by measuring signal strengths. The result gives the average signal strength as well as the standard deviation at each location. A site survey, using either a standard wireless device with a testing software tool or special sophisticated equipment, is an indispensable way to test existing WLAN networks: coverage, performance, etc.


Fig. 2.1 Coverage area.

The experimental area is shown with the help of Google Earth in Fig. 2.1. The main goal of the site survey is to measure the standard deviation.

2.1 RECEIVED SIGNAL STRENGTH

Wi-Fi wireless networks are everywhere [3]. Visualize all Wi-Fi networks: Ekahau HeatMapper displays the coverage area of all the access points in the area on a map. Fig. 2.2 shows that the amplitude of the signals varies for the different APs located in the experimental area. This helps identify the strongest AP. VisiWave provides four effective methods for capturing data (one point at a time, continuous walks through the survey area, GPS positioning for outdoor surveys, and a custom dead-reckoning navigation device), making data collection quick and easy [3][4]. Find security problems and open networks: HeatMapper displays whether there are security issues in some networks, and shows the location of unsecured networks. GPS helps measure the distance, in feet as well as in meters, from the transmitter to the receiver.

Fig.2.2 Map Survey for nearby located WiFi’s.


Many surveys were done with the help of GPS, which helps in getting the exact distance between the transmitter and the receiver. Ekahau HeatMapper shows that the signals of AP1 weaken as we move away from the building: the RSS values from that AP get weaker with distance.

2.2 LIMITATIONS OF INDOOR PROPAGATION MODELS
Improved propagation models are required to achieve reliable and accurate propagation predictions. The various challenges facing the development of indoor propagation models are as follows:
1) Propagation measurements depend primarily on often-unavailable building construction parameters such as wall thickness, materials, and indoor building structures.
2) A large number of prediction methods require computation of the effect of reflections and transmissions and hence become time-consuming and computationally inefficient.
3) Most of the techniques are applicable to high frequencies; thus the dimensions of some indoor structures may not satisfy the criterion, required by these methods, of dimensions large compared to the wavelength.
4) Small-scale fading causes great variation within half a wavelength; it is caused by multipath and moving scatterers. The resulting fades are usually approximated by Rayleigh or Ricean statistics, and measurements also show a good fit to Nakagami-m and Weibull distributions [4].

3. PROPAGATION MODEL
3.1 FREE SPACE PATH LOSS MODEL
The spatial distribution of power at a distance d from a transmitter is, in general, a decreasing function of d. This function is represented by a distance power law of the form

P ∝ 1/d^m (3.1)

For free space, m equals 2 and the power is said to follow an inverse-square law. In an enclosed environment, however, this is no longer true. It has been shown that when the transmitter and receiver were placed in the same living room, in sight of each other, the power decayed with a value of m ranging from 1.5 to 1.8; when the receiver was located within a room off the hallway, m ranged from 3 to 4.

The path loss also varies with frequency. The measurement results indicate that loss through

floors is greater at the higher frequency. It is found that at wavelengths in the millimeter

range the radio wave cannot penetrate most common building materials such as brick and

concrete block and that signal attenuation occurs more rapidly with distance [5]. Therefore


the millimeter waveband seems to be a good choice for providing broadband services in a

high-capacity frequency-reuse environment. The equation for FSPL is

FSPL = (4πd/λ)² = (4πdf/c)² (3.2)

where:
▪ λ is the signal wavelength (in meters),
▪ f is the signal frequency (in hertz),
▪ d is the distance from the transmitter (in meters),
▪ c is the speed of light in a vacuum, 2.99792458 × 10^8 meters per second.
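In decibel form, the FSPL of equation (3.2) reduces to 20·log10(4πdf/c). A minimal sketch (the 2.4 GHz / 10 m example values are assumptions for illustration, not measurements from the paper):

```python
import math

C = 299_792_458.0  # speed of light in a vacuum (m/s)

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB: FSPL = 20*log10(4*pi*d*f/c)."""
    return 20.0 * math.log10(4.0 * math.pi * d_m * f_hz / C)

# A 2.4 GHz WLAN signal 10 m from the transmitter
print(round(fspl_db(10.0, 2.4e9), 2))  # 60.05
```

Note the free-space behaviour implied by the inverse-square law: every doubling of distance adds about 6.02 dB of loss.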

Alexandra has given values of m according to the building materials used in the environment. The degree of signal attenuation depends on the type of materials the signal encounters. Consequently, the construction materials characterize the signal decay in an indoor environment.

Fig. 3.1 Free Space Path Loss Model

We used the VisiWave site survey software tool to verify the coverage of a specific AP and get a rough idea of the RSS values related to that AP, after covering a distance of 10 meters from the source. It helps in creating the survey data, which gives all the information related to the Wi-Fi signals.


Fig. 3.2 List of APs: SNR, MAC, SSID, etc.

Fig. 3.1 shows that in free space there is no loss of data between the transmitted and received signals. Fig. 3.2 shows the AP list, which also contains the MAC address, Max SNR, Min SNR, and Avg. SNR.

3.2. EMPIRICAL MODELS

Both theoretical and measurement based propagation models indicate that average received

signal power decreases logarithmically with distance. Empirical models help in reducing

computational complexity as well as increasing the accuracy of the predictions [6]. The

empirical model used in this study is Log-distance Path Loss Model.

3.2.1 Log-distance Path Loss Model
In both indoor and outdoor environments the average large-scale path loss for an arbitrary Transmitter-Receiver (T-R) separation is expressed as a function of distance by using a path loss exponent, n [10][9]. The average path loss PL(d) for a transmitter and receiver with separation d is:

PL(d)[dB] = PL(d0) + 10 n log(d/d0) (3.2)

where n is the path loss exponent, which indicates the rate at which path loss increases with distance d, and the close-in reference distance d0 is determined from measurements close to the transmitter.

3.2.2 LOG-NORMAL SHADOWING

Random shadowing effects occurring over a large number of measurement locations that have the same T-R separation but different levels of clutter on the propagation path are described by a log-normal distribution [7]. This phenomenon is referred to as log-normal shadowing. It leads to measured signals that are vastly different from the average value predicted by (3.2). To account for these variations, equation (3.2) is modified


as:

PL(d)[dB] = PL(d0) + 10 n log(d/d0) + Xσ (3.3)

where Xσ is a zero-mean Gaussian distributed random variable with standard deviation σ (both in dB). The close-in reference distance d0, the path loss exponent n, and the standard deviation σ statistically describe the path loss model for an arbitrary location having a specific T-R separation.

Table 3.1 Path loss exponents for different environments.
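The log-distance model with log-normal shadowing can be sketched as follows (the office-environment exponent n = 3 and PL(1 m) = 40 dB below are illustrative assumptions, not the paper's measurements):

```python
import math
import random

def path_loss_db(d, d0, pl_d0, n, sigma=0.0, rng=random):
    """Log-distance path loss with optional log-normal shadowing, eq. (3.3):
    PL(d) = PL(d0) + 10*n*log10(d/d0) + X_sigma, X_sigma ~ N(0, sigma) in dB."""
    x_sigma = rng.gauss(0.0, sigma) if sigma > 0.0 else 0.0
    return pl_d0 + 10.0 * n * math.log10(d / d0) + x_sigma

# Deterministic part only: n = 3, PL(d0 = 1 m) = 40 dB, d = 10 m
print(path_loss_db(10.0, 1.0, 40.0, 3.0))  # 70.0
```

Passing a seeded `random.Random` instance as `rng` makes a shadowed realisation reproducible.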

3.2.3 TWO-RAY MODEL

Unlike statistical models, site-specific propagation models do not rely on extensive measurements, but greater detail of the indoor environment is required to obtain an accurate

prediction of signal propagation inside a building. The received signal Pr for isotropic

antennas, obtained by summing the contribution from each ray, can be expressed as

Pr = Pt (λ/4π)² | e^(−jkr1)/r1 + Γ(α) e^(−jkr2)/r2 |² (3.4)

where Pt is the transmitted power, k = 2π/λ is the wavenumber, r1 is the direct distance from the transmitter to the receiver, r2 is the distance through reflection on the ground, and Γ(α) is the reflection coefficient, which depends on the angle of incidence α and the polarization [8].

Fig.3.3 Two ray model.


The reflection coefficient is given by

Γ(α) = (sin Θ − a √(εr − cos² Θ)) / (sin Θ + a √(εr − cos² Θ)) (3.5)

where Θ = 90° − α and a = 1/εr or 1 for vertical or horizontal polarization, respectively, and εr is the relative dielectric constant of the reflecting surface. The signal strengths from theoretical and empirical models are compared in this study.
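One plausible reading of equations (3.4) and (3.5), with the ray geometry made explicit, is sketched below. The antenna heights `ht`, `hr` and the flat-ground geometry are assumptions not spelled out in the text:

```python
import cmath
import math

def two_ray_received_power(pt, wavelength, ht, hr, d, eps_r, vertical=True):
    """Two-ray model: direct ray plus one ground-reflected ray (isotropic antennas)."""
    r1 = math.hypot(d, ht - hr)        # direct path length
    r2 = math.hypot(d, ht + hr)        # ground-reflected path length
    theta = math.atan2(ht + hr, d)     # grazing angle (Theta = 90 deg - alpha)
    a = 1.0 / eps_r if vertical else 1.0
    root = cmath.sqrt(eps_r - math.cos(theta) ** 2)
    gamma = (math.sin(theta) - a * root) / (math.sin(theta) + a * root)  # eq. (3.5)
    k = 2.0 * math.pi / wavelength     # wavenumber
    field = cmath.exp(-1j * k * r1) / r1 + gamma * cmath.exp(-1j * k * r2) / r2
    return pt * (wavelength / (4.0 * math.pi)) ** 2 * abs(field) ** 2   # eq. (3.4)
```

Beyond the breakpoint distance (roughly 4·ht·hr/λ) the two rays interfere destructively and the received power falls off much faster than in free space.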

3.3 DETERMINISTIC MODELING APPROACH

Deterministic or semi-deterministic models are primarily based on electromagnetic wave

propagation theory being as close to physical principles as possible. Most of the models

known as ray tracing or ray launching are based on geometrical optics. Some simplifications

lead to viewing the radio wave propagation as optical rays. Diffraction and the wave-guiding effect of a corridor can also be taken into account. Since the multipath propagation can be fully described, other space-time properties such as time delays and angles of arrival can be determined. On the other hand, for common planning only the propagation loss is sufficient, and the cost of this accuracy is enormous [4][7].

3.4. PARTITIONED MODEL

These models are very easy and fast to apply because the prediction is usually obtained from simple closed expressions. The requirements on the input environment description are also reasonable. At the same time, however, only the propagation loss, without great site-specific accuracy, can be predicted.

3.4.1 SINGLE-GRADIENT MULTI-FLOOR (SGMF) MODEL

The idea behind this model is to predict the path-loss from the AP to the receiver using a single distance power-gradient, with an added attenuation term when the AP and receiver are located on different floors. The path-loss in the SGMF model is given by

Lp = L0 + Lf(n) + 10 α log(d) (3.6)

where L0 is the path-loss over the first meter, Lf(n) is the attenuation attributed to the n floors between the transmitter and receiver, α is the distance-power gradient, and d is the distance between the transmitter and receiver [8]. Table 3.2 gives the set of parameters suggested for three different environments.
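Under the assumption that Lf(n) is a fixed per-floor attenuation (the paper tabulates Lf(n) per environment), the SGMF path-loss of equation (3.6) can be sketched as:

```python
import math

def sgmf_path_loss(d, n_floors, l0, alpha, floor_loss_db):
    """SGMF model, eq. (3.6): Lp = L0 + Lf(n) + 10*alpha*log10(d).
    Lf(n) modelled as n_floors * floor_loss_db -- an assumption for illustration."""
    return l0 + n_floors * floor_loss_db + 10.0 * alpha * math.log10(d)

# Same floor (n = 0); L0 = 38 dB, alpha = 3.5, 15 dB/floor are illustrative values
print(sgmf_path_loss(10.0, 0, 38.0, 3.5, 15.0))  # 73.0
```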


Fig.3.4 The performance of the second floor and ground floor

Fig.3.5 Signal Strength for SGMF


Table.3.2 Measurement for RSS values at first floor.

Fig. 3.4 displays the performance of the signal strength with distance on the second floor and the ground floor. The graph shows that the performance on the second floor is better than on the ground floor due to the presence of two APs at the same time. The performance of the first floor, computed in MATLAB, is shown in Fig. 3.5.

The formula for the SGMF+BP model is given by:

Lp = L0 + Lf(n) + 10 α1 log(d), for d ≤ dwbp
Lp = L0 + Lf(n) + 10 α1 log(dwbp) + 10 αE log(d/dwbp), for d > dwbp (3.7)

where Lp is the path-loss over distance d in dB, L0 is the path-loss over the first meter in dB, Lf(n) is the attenuation attributed to each floor, n is the number of floors between the transmitter and receiver, α1 and αE are the distance-power gradients for the respective path sections, and dwbp is the dynamic AP-specific wall breakpoint in meters [9].


Fig.3.6 Performance of Partitioned Models at first floor

3.4.2 MULTI-GRADIENT SINGLE-FLOOR (MGSF) MODEL
The Multi-Gradient Single-Floor (MGSF) model has most recently been used to model WiFi propagation path-loss in indoor environments. The distance-partitioned MGSF model is

Lp = L0 + 10 α1 log(d), for d ≤ dbp
Lp = L0 + 10 α1 log(dbp) + 10 α2 log(d/dbp), for d > dbp (3.9)

where Lp is the path-loss over distance d in dB, L0 is the path-loss over the first meter in dB, α1 and α2 are the distance-power gradients for path sections one and two respectively, and dbp is the breakpoint distance in meters. Table 4.1 gives suggested parameter sets for three environments defined for the 802.11 standard in reference [9][10].
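A dual-slope sketch of the distance-partitioned MGSF model of equation (3.9); the piecewise form and the parameter values below are illustrative assumptions, not the tabulated 802.11 parameter sets:

```python
import math

def mgsf_path_loss(d, l0, alpha1, alpha2, d_bp):
    """Distance-partitioned MGSF model, eq. (3.9): gradient alpha1 up to the
    breakpoint d_bp and gradient alpha2 beyond it (dual-slope form assumed)."""
    if d <= d_bp:
        return l0 + 10.0 * alpha1 * math.log10(d)
    return (l0 + 10.0 * alpha1 * math.log10(d_bp)
            + 10.0 * alpha2 * math.log10(d / d_bp))

# Illustrative parameters: L0 = 40 dB, alpha1 = 2.0, alpha2 = 3.5, d_bp = 5 m
print(round(mgsf_path_loss(50.0, 40.0, 2.0, 3.5, 5.0), 2))
```

Writing the second segment relative to d_bp keeps the predicted path-loss continuous at the breakpoint.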

Fig.3.7 Building Partitioned model


Table 3.3 Indoor Residential LOS and NLOS values

The MGSF+BP model's distance-power gradient was larger than the internal-path distance-power gradient, which does not fit the known path-loss environment: the interior paths should have higher path-loss due to interior walls and other physical obstructions. Fig. 3.7 displays the building-partitioned model with the help of Table 3.3 and Table 3.4.

Table 3.4 MGSF standards for calculations

The first floor of the building was taken into consideration for the MGSF model. We selected several distances from the APs to calculate the path loss in this area. Three APs were placed nearby so that the signal from each AP would be approximately the same. Fig. 3.8 shows how the performance of the partitioned models differs between a single floor and multiple floors. The distance-power gradient for this model is most likely artificially high due to the absence of the exterior-wall path-loss, and it should result in lower performance than the other model, which includes the exterior-wall path-loss.


Fig.3.8 Coverage distance model prediction to empirical data comparison for each AP

The over-predicted coverage could also be attributed to the higher mean RSS predicted by the models. Fig. 3.9 and Fig. 3.10 show the signal strength for MGSF and MGSF+BP. To overcome this shortfall, the footprints of the surrounding buildings could be added to future models.

Fig.3.9 Signal Strength for MGSF

5. RESULT ANALYSIS
In this paper, we have pointed out the importance of propagation models in the development of indoor wireless communications. Propagation models provide estimates of signal strength and time dispersion in many indoor environments. These data are valuable in the design and installation of indoor radio systems. Site-specific propagation modeling by solving Maxwell's equations is costly and impractical. The inclusion of diffraction theory can broaden its application to lower radio frequencies. The accuracy of ray-tracing techniques depends heavily on the accuracy and detail of the site-specific representation of the propagation medium. The SGMF model had a higher peak performance, but the MGSF model had a slightly higher mean performance. The two methods for the design of large wireless local area networks, site survey and software planning, were compared. The drawbacks of site survey due to the time- and space-varying environment were investigated


using a simple experiment. An overview of the available propagation models and their usage was given.

REFERENCES
1. SUZUKI, H.: 'A statistical model for urban radio propagation', IEEE Trans. Commun., July 1977, COM-25, pp. 673-680
2. HASHEMI, H.: 'Simulation of the rural radio propagation channel', IEEE Trans. Veh. Technol., August 1979, VT-28, pp. 213-224
3. BAJWA, A.S.: 'UHF wideband statistical model and simulation of mobile radio multipath propagation effects', IEE Proc. F, August 1985, 132, (5), pp. 327-333
4. RAPPAPORT, T.S., SEIDEL, S.Y., and SINGH, R.: '900 MHz multipath propagation measurements for US digital cellular radiotelephone', IEEE Trans. Veh. Technol., May 1990, VT-39, (2), pp. 132-139
5. RAPPAPORT, T.S., SEIDEL, S.Y., and TAKAMIZAWA, K.: 'Statistical channel impulse response models for factory and open plan building radio communication system design', IEEE Trans. Commun., May 1991, 39, (5), pp. 794-807
6. HASHEMI, H., THOLL, D., and MORRISON, G.: 'Statistical modeling of the indoor radio propagation channel: part I'. Proc. IEEE Vehicular Technology Conference, VTC'92, Denver, CO, May 1992, pp. 338-342
7. HASHEMI, H., LEE, D., and EHMAN, D.: 'Statistical modeling of the indoor radio propagation channel: part II'. Proc. IEEE Vehicular Technology Conference, VTC'92, Denver, CO, May 1992, pp. 839-843
8. HASHEMI, H.: 'Impulse response modeling of indoor radio propagation channels', IEEE J. Sel. Areas Commun., September 1993, SAC-11, pp. 1788-1796
9. BEN SLIMANE, S., and GIDLUND: 'Performance of wireless LANs in radio channels'. IEEE Multiaccess, Mobility and Teletraffic for Wireless Communications, December 2000, 5, pp. 329-340
10. MCKOWN, J.W., and HAMILTON, R.L.: 'Ray tracing as a design tool for radio networks', IEEE Network, November 1991, 5, (6), pp. 27-30
11. www.metageek.net/products/inssider
12. www.earth.google.com/
13. www.visiwave.com/


GIS SOLUTION FOR ENVIRONMENTAL MANAGEMENT AND NATURAL RESOURCES DEVELOPMENT
Nitin Sukhadevrao Goje*

Dr. Ujwal A. Lanjewar**

ABSTRACT
Environmental management is inherently a spatial endeavor. Its data are particularly

complex as they require two descriptors; namely the precise location of what is being

described, as well as a clear description of its physical characteristics. For hundreds of

years, explorers produced manually drafted maps which served to link the “where is what”

descriptors. With an emphasis on accurate location of physical features, early maps helped

explorers and navigators chart unexplored territory.

The current surge of interest in environmental information springs from the convergence of

three profound world-wide trends: environmental awareness, liberation of public affairs, and

information technology. Degradation of environmental resources (air, water, soil and

biodiversity) has mobilized public opinion. This is because these resources intimately and

directly affect the quality of our lives. As a result the public demands to be better informed on

the state of the environment. In turn, governments and industries need spatial information in

order to manage and utilize the environmental resources in a sustainable manner.

The past two decades have witnessed dramatic advances in Information Technology. Spatial

data processing has advanced to the point where it matches the applications challenges

presented by the natural resource management. In addition, the Internet, Geomatics, and

Telecommunications are rapidly changing the way natural resources are being managed and

protected. These have provided more accurate and up-to-date information about resources;

further, the information is readily available to would-be users. In this paper we discuss the contribution of Geographic Information Systems to Natural Resources Development and Environmental Management.

Keywords: GIS, Environmental Management, Natural Resources Development, Data Model.

*Assistant Professor, ITM Institute of Management & Research, Kamptee, Nagpur.

**Professor, VMV Arts, JMT Commerce and JJP Science College, Nagpur.


INTRODUCTION
Responsible and successful environmental management is necessary for protecting and

restoring the natural environment. The interdependency of the earth’s ecosystems and the

human impact on the environment present complex challenges to governments and

businesses as well as scientists and environmentalists in every discipline.

Geographic information system (GIS) technology is used to support and deliver information

to environmental managers and the public. GIS allows the combination and analysis of

multiple layers of location-based data including environmental measurements. The

environmental application areas of GIS are varied in terms of potential users, environmental

spheres, and the specific environmental issue being investigated. [7]

OBJECTIVE
The objective of this paper is to study the various aspects of environmental management and natural resources development and to examine the solutions provided by Geographic Information Systems.

GIS environmental management solutions enable organizations to

• Ensure accurate reporting with improved data collection.

• Improve decision making.

• Increase productivity with streamlined work processes.

• Provide better data analysis and presentation options.

• Model dynamic environmental phenomena.

• Create predictive scenarios for environmental impact studies.

• Automate regulatory compliance processes.

• Disseminate maps and share map data across the Internet.

LITERATURE SURVEY
What is GIS?

GIS is a powerful software technology that allows a virtually unlimited amount of

information to be linked to a geographic location. Coupled with a digital map, GIS allows a

user to see locations, events, features, and environmental changes with unprecedented clarity,

showing layer upon layer of information such as environmental trends, soil stability, pesticide

use, migration corridors, hazardous waste generators, dust source points, Lake Remediation

efforts, and at-risk water wells. Effective environmental practice considers the whole

spectrum of the environment. GIS technology offers a wide variety of analytical tools to meet

the needs of many people, helping them make better decisions about the environment. [7]


People in the environmental management community use GIS to organize existing

information and communicate that information throughout their organizations. GIS can be

used as a strategic tool to automate processes, transform environmental management

operations by garnering new knowledge, and support decisions that make a profound

difference on our environment.

GIS in Environmental Management and Natural Resources Development

GIS is a vital tool in natural resources development. The various aspects of resource

management it supports include storage and retrieval of data, interpretation and analysis of

the resource data, and development of the Resource Management Plans (RMP's). Resource

use alternatives are formulated, and the GIS is used to evaluate each in terms of

environmental impact, economic implications, acreage, and potential use conflict. One

important function of GIS is to assist in recognizing underlying patterns in data. These

patterns may be areas of forestland suitable for timber harvest or potential shifts in population

distribution. GIS simulations can be used to understand the direct and indirect effects of

human activities over long periods of time and over large areas.

By using the database integration capabilities of GIS, Planners and Resource Managers gain a

better understanding of the complex interrelationship between physical, biological, cultural,

economic, and demographic considerations around a specific resource. Access to and understanding of this information is essential for making sound resource-use decisions, ensuring balanced management and use of the resources. GIS is increasingly replacing traditional methods because it is faster, more cost-efficient and more accurate. GIS analyses are hence becoming routine in a significant number of field offices. [6]

Examples of GIS Application in Natural Resources Development

GIS applications are diverse and include water quality monitoring, modeling narcotic crop sites, waste site assessment, analyzing the effects of carbon dioxide, etc. Some forest-related analyses overlay forested areas and logging areas to see what percentage of forest area is in danger of degradation. Adding data on protected areas or biodiversity hot spots allows one to see how these areas fit into the picture. Egregious problems, such as protected areas being included in logging concessions, can also be detected. [6]

One notable example is the detection of illegal oil and gas drainage from public lands by

wells on private lands. GIS reduces the process of drainage detection from several days done

manually to a few hours.


GIS ENVIRONMENTAL DATA MODEL
The objects of environmental data modeling are both of its basic components: the physical-geographical sphere as well as the social-economical one. The process of environmental data modeling can be visualized on the basis of Fig. 1.

Fig. 1. GIS Environmental data model [8]

A model of the environment created by geodetic, cartographic and photogrammetric methods should have a structure, contents and accuracy that enable a functional data model to be re-created from it. This data model should not only be processable by computer technologies, but also be simple to use for many purposes and users. [8]

FINDINGS & SUGGESTIONS
Environmental developers and planners work together to bring the environmental management community benefit and value from GIS. The model given in the figure above provides various benefits for Environmental Management and Natural Resources Development. The model adds the following benefits:

• Database-sharing architecture that supports decision making and daily work tasks

• Interoperable system solutions for integrated workflow and data access

• Internet mapping solutions that support interagency collaboration projects

• Quality control processes that ensure accurate, high-quality data

• Worker-friendly designs that increase agency-wide access and application

• Scalability that supports and adapts to growing and evolving IT demand

We suggest that applying the above data model in environmental management and natural resources development can help in accessing accurate data at high speed. The model needs to be integrated using various programming languages, databases and GIS analytical engines.

CONCLUSION & FUTURE SCOPE
Even though obstacles remain to their full deployment, Geomatics technologies now being developed and demonstrated suggest natural resource applications that were not believed possible using traditional techniques. As we progress towards the long-talked-about notion of


integrated natural resources development and environmental management, some parallel

continuums along which the technology manifests are:

- The technology helps create integrated views of databases that span the levels of

map scale, detail and use. This helps in understanding the earth’s ecology.

- The technology meets the need for information presentation tools, as the pendulum

swings towards community place based management.

- The emergence of shared data infrastructure and accelerated information delivery,

e.g. Internet data ordering.

- Significant advances in data acquisition technology.

- Rapid improvement in data storage, retrieval and analysis.

REFERENCES

1. Fedra, K. (1993) GIS and environmental modelling. In: Environmental Modelling with GIS (ed. by M. F. Goodchild, B. O. Parks & L. T. Steyaert), 35-50. Oxford University Press.

2. Goodchild, M. F. (1993) Data models and data quality, problems and prospects. In: Environmental Modelling with GIS (ed. by M. F. Goodchild, B. O. Parks & L. T. Steyaert), 94-103. Oxford University Press.

3. Harris, J., Gupta, S., Woodside, G. & Ziemba, N. (1993) Integrated use of a GIS and a three-dimensional, finite-element model: San Gabriel Basin groundwater flow analyses. In: Environmental Modelling with GIS (ed. by M. F. Goodchild, B. O. Parks & L. T. Steyaert), 168-172. Oxford University Press.

4. Hassan H. M., Hutchinson C., 1992, Natural Resources and Environment Information

For Decision Making, The World Bank.

5. Johannsen C.V., Sanders J. L., 1982, Remote Sensing for Resource Management, Soil

Conservation Society Of America, Michigan, USA.

6. TS12.2 James Osundwa: The Role of Spatial Information in Natural Resource

Management International Conference on Spatial Information for Sustainable

Development Nairobi, Kenya 2–5 October 2001.

7. www.esri.com/environment/ GIS solution for Environmental Management.

8. ArtInAppleS, Ltd.: ArtGIS Training Program. ArtInAppleS, Ltd. Bratislava,

www.artinapples.sk.


PHYSICO-CHEMICAL ANALYSIS OF YAMUNA WATER AT MATHURA

Suman Yadav*

Dr. K.C. Gupta*

ABSTRACT

The physico-chemical properties of the water of the Yamuna River at Mathura (U.P.) were studied from July 2009 to June 2010. Three sampling stations were selected for the study. The parameters studied were temperature, turbidity, pH, DO, BOD, COD, total dissolved solids and suspended solids. Almost all the parameters were found to be above the tolerance limit.

Keywords: Pollution, Pollutants, D.O., B.O.D., C.O.D., Turbidity, Effluents, TDS,

TSS.

*Deptt. of Chemistry, Singhania University, Jhunjhunu, (Raj.)


INTRODUCTION

Pollution is one of the most challenging problems today. Unwanted substances are being regularly added to our environment, making it unsafe to live in. Population growth, rapid economic development, industrialisation and unconscious human activities are slowly transforming our planet into a rotten place. The balance of nature has been so adversely affected that we face frequent floods in some areas and severe drought in others.

Mathura (U.P.) is considered a historical and holy place, being the birthplace of Lord Krishna. Millions of pilgrims from every corner visit Mathura every year and bathe in the holy river Yamuna. Their stay in the city causes a severe sewage and garbage disposal problem. The sewage, along with the garbage, is disposed of either directly or indirectly into the river Yamuna through a number of wide drains, resulting in heavy water pollution.

Furthermore, Mathura is a fast-developing city. A number of small and large industries operate here, which use highly harmful and non-biodegradable chemicals such as sulphuric acid, silica powder, hydrochloric acid, detergents including alkyl benzene sulphonate and linear alkyl sulphonate, and several dyes containing cyanide, arsenic, cadmium, mercury and lead compounds. Their menacing effects have been manifested in the death of thousands of aquatic organisms.

MATERIALS AND METHODS

Sampling was done in the second week of each month in glass bottles of 300 ml capacity. The physico-chemical parameters of the water were determined on the spot with the help of a portable water detection kit (Model no. CK-710, manufactured by Century Instruments Pvt. Ltd., Chandigarh). The temperature was measured on the spot using the temperature-sensitive electrodes of the portable water detection kit. Other physico-chemical parameters were determined in the laboratory using the methods suggested by APHA (1985) and the NEERI manual (1986). The results were compared with the standard permissible limits suggested by WHO and ISI. For digestion and pre-concentration of water samples, standard methods were followed (Chakraborty et al. 1987 and Subramaniam 1987).


RESULTS AND DISCUSSION

Temperature –

Temperature is an important physical factor which controls the natural processes of the environment. It varied in accordance with the seasonal changes, ranging from 16.5 to 35.9 °C. It was higher in May, June and July and lower during the winter months, i.e. December and January.

Turbidity –

Turbidity is generally caused by untreated and undecomposed organic matter, sewage and industrial waste. It was very high in July and August because of 'Janmashtami' and 'Shravan Maas', when there is a mass gathering in the city and millions of people bathe in the Yamuna river. It ranged from a minimum of 64 NTU to a maximum of 131 NTU.

pH –

pH shows the acidic or alkaline nature of water. The water of the river Yamuna was found to be slightly alkaline, ranging between 7.1 and 8.6. This trend is similar to those reported by Mathur et al. (1987), Dakshini et al. (1979), Kumar and Sharma (2005) and Singh et al. (1988).

TABLE 1: PHYSICO-CHEMICAL PARAMETERS OF RIVER YAMUNA FROM JULY 2009 TO JUNE 2010
(Average values of three sites; the columns Jul-Jun span the rains, winters and summers)

Parameter     Unit   Jul    Aug    Sep    Oct    Nov    Dec    Jan    Feb    Mar    Apr    May    Jun
Temperature   °C     30.8   30.4   23.4   22.7   22.4   19.1   16.5   21.1   23.9   28.0   33.4   35.9
Turbidity     NTU    121    125    99     111    83     104    64     87     81     79     89     131
pH            -      7.1    7.2    7.6    8.4    8.5    7.9    8.6    7.7    7.6    8.5    8.4    7.5
D.O.          mg/l   2.2    2.4    4.7    6.6    3.9    9.8    8.7    4.6    11.8   6.5    2.8    1.8
B.O.D.        mg/l   35.1   34.2   7.8    8.9    17.6   5.8    5.4    19.8   12.0   22.1   44.8   47.0
C.O.D.        mg/l   44.1   22.5   15.3   19.4   32.1   17.9   12.3   33.5   17.2   59.4   44.4   60.5
T.D.S.        mg/l   501    421    412    506    622    455    413    512    648    605    606    698
T.S.S.        mg/l   419    387    412    401    446    521    346    421    458    401    502    512
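The parameter ranges quoted in the discussion below can be reproduced directly from Table 1. The short check below (plain Python, monthly values transcribed from the table) recomputes the minimum-maximum range of each parameter.

```python
# Monthly averages (Jul-Jun) transcribed from Table 1 above.
table = {
    "Temperature": [30.8, 30.4, 23.4, 22.7, 22.4, 19.1, 16.5, 21.1, 23.9, 28.0, 33.4, 35.9],
    "Turbidity":   [121, 125, 99, 111, 83, 104, 64, 87, 81, 79, 89, 131],
    "pH":          [7.1, 7.2, 7.6, 8.4, 8.5, 7.9, 8.6, 7.7, 7.6, 8.5, 8.4, 7.5],
    "DO":          [2.2, 2.4, 4.7, 6.6, 3.9, 9.8, 8.7, 4.6, 11.8, 6.5, 2.8, 1.8],
    "BOD":         [35.1, 34.2, 7.8, 8.9, 17.6, 5.8, 5.4, 19.8, 12.0, 22.1, 44.8, 47.0],
    "COD":         [44.1, 22.5, 15.3, 19.4, 32.1, 17.9, 12.3, 33.5, 17.2, 59.4, 44.4, 60.5],
    "TDS":         [501, 421, 412, 506, 622, 455, 413, 512, 648, 605, 606, 698],
    "TSS":         [419, 387, 412, 401, 446, 521, 346, 421, 458, 401, 502, 512],
}

# Range of each parameter over the study year.
ranges = {name: (min(vals), max(vals)) for name, vals in table.items()}
for name, (lo, hi) in ranges.items():
    print(f"{name}: {lo} - {hi}")
```

These computed ranges match the ones stated in the text (e.g. DO 1.8-11.8 mg/l, BOD 5.4-47.0 mg/l, TSS 346-521 mg/l).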


Dissolved Oxygen –

Dissolved oxygen is essential for the decomposition of chemical waste and dead organic matter. It showed a variable trend: maximum in winter and minimum in summer, ranging between 1.8 and 11.8 mg/l (Kumar & Sharma, 2004).

BOD –

BOD is the amount of oxygen required by living aquatic organisms for their physiological processes. It was found to be very high in summer and comparatively low in winter, ranging between 5.4 and 47.0 mg/l. The findings were similar to those observed by Kumar & Sharma (2005).

COD –

COD is the amount of oxygen required for the decomposition of chemical waste. A high value of COD indicates a higher accumulation of organic waste in the water body. It was found to be higher during summer (60.5 mg/l) and lower during winter (12.3 mg/l), which is in accordance with the observations made by Shankar et al. (1986), Reddy et al. (1985) and Sangu et al. (1983).

TDS –

Total dissolved solids also serve as an indicator of pollution. The trend was found to be highly fluctuating, ranging between 412 and 698 mg/l (Saxena et al., 1993 and Siddiqui et al., 1994).

TSS –

Total suspended solids were also found to fluctuate widely. TSS was higher in summer and lower in winter, ranging between 346 and 521 mg/l. The findings were similar to those observed by Mathur et al. (1987), Saxena et al. (1991) and Shahji et al. (1993).

Summary & Conclusion –

From the above observations it is concluded that the Yamuna river is highly polluted and that the use of its water may cause various diseases. Remedial measures are required to restore good water quality and to safeguard the health of the people.

REFERENCES

1. APHA, AWWA, WPCF (1992), 'Standard Methods for the Examination of Water and Waste Water'. American Public Health Association, New York.


2. Bhaumik B.M. and Singh A.K. (1985), ‘Phytoplankton population in relation

to physico-chemical factor of river Ganga at Patna’. Ind. J. Eco. 12(2) : 360-

364.

3. Dakshini K.M.M. and Soni J.K. (1979), 'Water quality of sewage drains entering Yamuna at Delhi'. Ind. J. Env. Hlth., Vol. 21, No. 4, 354-360.

4. Kumar Praveen and Sharma H.B. (2004) ‘ Studies on fluctuating trends in

some aquatic micro-organisms of Radha Kund at Mathura’ Flora & Fauna,

Vol. 10, no. 1, 22-24.

5. Kumar Praveen and Sharma H.B. (2005) ‘Physico-chemical characteristics

of lentic water of Radha Kunda (District Mathura)’. Ind. J. of Env. Sc. 9(1),

21–22.

6. Mathur A.,Y.C. Sharma, D.C. Rupainwar, R.C. Murthy and S.V. Chandra

(1987), ‘A study of river Ganga at Varanasi with special emphasis on heavy

metal pollution’. Poll. Res., 6(1):37-44.

7. Reddy M. and P.V. Venkateshwaralu (1985), ‘Ecological studies in the

paper mill effluents and their impact on river Tungabhadra : Heavy metals

and algal’. Proc. Ind. Acad. Sc. (Plant Sci.), 1985(3) :139-146.

8. Sangu R.P.S., P.D. Pathak and K.D. Sharma (1983), ‘Monitoring of Yamuna

river at Agra’. Proc. of the Nat. Confr. On river Poll. And human health.

9. Saxena K.K. and R.R.S. Chauhan (1993), ‘Physico-chemical aspects of

pollution in river Yamuna at Agra’. Poll. Res., 12(2) :101-104.

10. Shaji C. and R.J. Patel (1991), 'Chemical and biological evaluation of pollution in the river Sabarmati at Ahmedabad, Gujarat'. Phycos, 30: 981-1000.

11. Shankar V., R.P.S. Sangu and G.C. Joshi (1986), 'Impact of distillery effluents on the water quality and eco-system of river Reh in Doon Valley'. Poll. Res., 5(3&4): 137-142.

12. Sharma K.D., Lal N. and Pathak P.D. (1981), ‘Water quality of sewage

drains entering Yamuna at Agra’. Ind. J. Env. Hlth., Vol. 23 no. 2 : 118-122.

13. Shekhar S. (1985), ‘Studies on river pollution on river Cauveri’. Ind. J. Env.

St., 23 : 115-124.

14. Siddiqi Z.M., R.S. Panesar and S. Rani (1994), 'Bio-chemical effects of sewage disposal on the water quality of the Sutlej river'. I.J.E.P., 14(10): 740-743.


15. Singh J.P., P.K. Yadav and L. Singh (1988), 'Pollution status of Sangam and its adjoining rivers before the Kumbh Mela at Allahabad'. I.J.E.P., 8(11): 839-842.

16. WHO (1984), ‘International Standard for water.’ Third ed. Geneva.


DIVERSITY, ECOLOGICAL STRUCTURE AND CONSERVATION OF HERPETOFAUNA IN TRANS YAMUNATIC REGION OF MATHURA

Dr. H. B. Sharma*

Mamta Warman**

INTRODUCTION

Biological diversity is fundamental to the fulfilment of human needs. An environment rich in biological diversity offers the broadest array of options for sustainable economic activity, for sustaining human welfare and for adapting to change. Loss of biodiversity has serious economic and social costs for any country. Although the importance of biodiversity is widely understood, it is not easy to define its value, and very often difficult to estimate it. The river Yamuna, with a total length of around 1,370 kilometres (851 mi), is the largest tributary of the Ganges in northern India. The Yamuna is considered the most sacred among all rivers in Hindu mythology. Its source is at Yamunotri in the Uttarakhand Himalaya. It flows through Delhi, Haryana and Uttar Pradesh before merging with the Ganges at Allahabad. The cities of Delhi, Mathura and Agra lie on its banks. With the gradual increase in human population and the pressure on land for agriculture, urbanization, industrialization and developmental activities, wetlands are severely endangered and decaying day by day. Wetlands provide habitats for fauna and flora. They also serve as a life support system by helping in water quality improvement, flood control, recharging of ground water, storm protection, shoreline stabilization, regulation of the hydrological regime, conservation of biological diversity and reduction of sediment loads to water bodies.

*HOD Dept.of ZOOLOGY, BSA College Mathura

**Scholar


STUDY AREA

The ecology of this area is a little-explored subject, with very few studies on herpetofauna. To our knowledge, this is the first description of initial studies of this region. It extends between 27°30′N, 77°58′E and 27.58°N, 77.70°E, on the alluvial flood plain of the Ganga, which is fed by its tributary the Yamuna. It is touched by twelve Village Development Committees at Mathura. The Gokul barrage has been constructed to trap the Yamuna. Reptile and amphibian species were surveyed during July, August and September (the monsoon period). Precipitation mostly influences the water flow in the rivers through flooding. During the non-monsoon period (October to June) the river flow reduces significantly and some river stretches become dry. In contrast, during the monsoon period the rivers receive a significant amount of water, beyond their capacity, resulting in floods. The river Yamuna also experiences such periods of drought and flood: it carries almost 80% of its total annual flow during the monsoon period. The flow reduces significantly during the non-monsoon period and is, moreover, diverted from the river and extensively used for irrigation and drinking purposes, leaving very little or no water flow in the river.

Satellite view of river Yamuna in Mathura


Flow of river Yamuna through Mathura

METHODOLOGY

Each zone of the area was studied on the basis of the following geographical divisions:

Forest

Water bodies – ponds, river and wells

Open fields

Rocky areas

All the water bodies were sampled for aquatic amphibians, and soil was dug to determine the presence of burrowing species. Each zone was explored randomly (with bias) on the basis of habitat structure and the possibility and availability of species. All important major and minor water bodies, including seasonal rivulets, were extensively explored for herpetofaunal species. Identification was done according to the diagnostic keys provided by Smith (1935) and Daniel (1963). Sampling was conducted at each study site for three consecutive days. At Mathura upstream, near Chirharan Ghat at Vrindavan, the river is monitored to assess the water quality of the Yamuna before it enters Vrindavan-Mathura; Mathura downstream, at Gokul barrage, depicts the impact of wastewater discharges from Mathura-Vrindavan city. Amphibian and reptile visual encounter surveys (ARVES) are a standard method for terrestrial herpetofauna inventories (Campbell and Christman 1982, Corn and Bury 1990, Crump and Scott 1994). In this method, the study sites were walked for


the prescribed time period, systematically searching for herpetofaunal species. The species were searched for throughout the region in water sources, near the water reservoirs and also in the trees, to ensure that each type of habitat was covered. Many techniques have been described for the inventory and monitoring of amphibian and reptile populations (Gibbons and Semlitsch 1981; Heyer et al. 1994; Olson et al. 1997). Calling is a basic characteristic of male frogs: during the breeding season they produce sound to attract females. The calling quality of each species is quite distinct and may be used as a distinguishing character of the species. This unique character was used here, with the researcher following the sound to locate the frogs. Identification based on sound was made in Rajasthan by Sharma (2005a,b). Sound-spectrum identification and taxonomic categorization is not only precise but also environment-friendly, because it does not involve unnecessary killing and fixation of animals, and data transformation is also very fast. Sharma (2005a) and his associates use this technique to monitor anuran species in their habitats. Similarly, some lizard species, such as H. brookii, produce a characteristic sound, so they were also recorded during the survey by this method. Transect sampling was applied to search for reptiles along elevation gradients from lowlands to uplands, depending on the area of the study sites. The map of each study site was marked to cover most of the vegetation and aquatic habitats. Five plots of 100 m × 6 m each, separated by gaps of 50 m × 100 m, were used as a transect. Amphibians and reptiles are often found in specific microhabitats or patches, such as underneath tree logs, in holes and under boulders. Patch sampling can be used to determine the number, relative abundance and densities in such patches. Some reptile species commonly breed in rock caves, under loose bark, etc.; similarly, amphibians breed in ponds and water bodies. Data on sex ratio, size and patterns could be collected at such sites.
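The transect layout described above implies a fixed searched area per transect. The small sketch below (plain Python, dimensions taken from the text) makes the arithmetic explicit; the hectare conversion is added for convenience.

```python
# Searched area of one transect as described in the text:
# five plots of 100 m x 6 m each, separated by 50 m x 100 m gaps.
PLOTS = 5
PLOT_LENGTH_M = 100
PLOT_WIDTH_M = 6

sampled_area_m2 = PLOTS * PLOT_LENGTH_M * PLOT_WIDTH_M
print(sampled_area_m2)        # searched area per transect, in square metres
print(sampled_area_m2 / 1e4)  # the same area expressed in hectares
```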

OBSERVATION

In and around the trans-Yamuna region there are many notable wetlands: rivers, floodplains, riverine marshes, freshwater marshes and ponds, seasonally flooded grassland, swamp forest, reservoirs, paddy fields, etc. These wetlands are suitable habitat for the herpetofauna, and more than 25 species were recorded. The Yamuna supports a wide variety of plant and animal species, and the river is home to many herpetofaunal species; its banks are among their most important dwelling places. All other visits were conducted during spring daytime hours, from late March to mid June, over two years. Species were identified visually (adults, larvae, egg masses) and the type of their wet microhabitat was described (brook, ditch, humic substrate, inundated land, lowland stream, puddle, well, wet grass).


Observation Table (Source: Meteorological Department, Mathura)

Month   Min     Max     Humidity   Rainfall   Total rainfall
JAN     4.48    19.25   55.33      -          -
FEB     6.37    22.87   51.33      -          -
MAR     14.33   32.00   40.33      -          -
APR     20.6    38.26   35.33      48         48
MAY     23.66   39.33   51.00      96         144
JUN     23.74   35.33   58.00      214        358
JUL     24.70   34.61   77.00      308        666
AUG     24.80   33.80   84.33      308        974
SEP     23.16   33.1    72.66      113        1087
OCT     20.19   33.29   65.00      -          1087
NOV     15.00   27.53   53.33      -          1087
DEC     10.25   22.38   58.33      -          -
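The running total in the observation table can be checked mechanically. The sketch below (plain Python, monthly values transcribed from the table) recomputes the total-rainfall column for the months in which rainfall was recorded.

```python
# Monthly rainfall for Apr-Sep, transcribed from the observation table above.
monthly = {"APR": 48, "MAY": 96, "JUN": 214, "JUL": 308, "AUG": 308, "SEP": 113}

running = 0
cumulative = {}
for month, amount in monthly.items():  # dicts preserve insertion order in Python 3.7+
    running += amount
    cumulative[month] = running

print(cumulative)
```

The recomputed running totals agree with the table's total-rainfall column, ending at 1087 in September (the value then carried through October and November).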

List of herpetofauna species found in the trans-Yamuna region at Mathura

S.No.  Species Name            Common Name

1 Hardella thurjii Brahminy Terrapin or Kali Kauntha

2 Geochelone elegans Starred Tortoise

3 Kachuga tentoria Indian Tent Terrapin


4 Kachuga tecta Roofed terrapin

5 Calotes versicolor Blood Sucker

6 Calotes jerdoni Jerdon’s Blood Sucker

7 Calotes rouxi Forest calotes

8 Sitana ponticeriana Fan throated lizard

9 Hemidactylus flavivirdis Northern house gecko

10 Hemidactylus brookii Brook’s Gecko

11 Cosymbotus platyurus Frilled house gecko

12 Mabuya carinata Common Skink

13 Lygosoma punctatus Snake skink

14 Varanus bengalensis Common Indian Monitor

15 Varanus flavescens Yellow Monitor

16 Varanus griseus Desert Monitor

17 Ramphotyphlops braminus Blind snake

18 Eryx conicus Russell’s earth boa

19 Eryx johni Boa boa

20 Python molurus bivittatus Indian Python or Rock Python

21 Elaphe radiata Copperhead or Trinket Snake

22 Ptyas mucosus Rat Snake or Dhaman snake

23 Argyrogena fasciolata Banded Racer

24 Spalerosophis diadema Royal or Diadem snake

25 Naja naja Indian Spectacled or Binocellate Cobra

26 Naja naja Indian Nag

27 Bungarus caeruleus Black Krait

28 Bufo stomaticus Marbled toad

29 Euphlyctis cyanophlyctis Skipping Frog

30 Hoplobatrachus tigerinus Indian Bull Frog

31 Hoplobatrachus crassus Jerdon’s Bull Frog

32 Rana limnocharis Indian Cricket Frog

33 Sphaerotheca breviceps Indian Burrowing Frog

34 Microhyla inornata Burrowing microhylid frog

35 Uperodon systoma Marbled balloon frog


RESULT

Dash & Mahanta (1993) highlighted the need for extensive quantitative ecological studies on amphibian communities in Indian ecosystems. Habitat destruction and alteration is considered one of the most important factors in amphibian decline (Blaustein & Wake, 1990; Khan, 1990; Ghate & Pandhye, 1996; Ravichandran, 1998; Alford & Richards, 1999). Daniels (1995, 1999a) and Molur & Walker (1998) highlighted the need for amphibian research and conservation in India, in terms of amphibian taxonomy, range distribution, ecology and conservation requirements. Ashley and Robinson (1996) observed that road-kills of herpetofauna are a major cause of mortality for a wide variety of taxa; however, management decisions to implement actions for reducing losses are based on economic realities, and herpetofauna are often low-profile species. Wilson et al. (2001) argued that biodiversity decline is one of the most serious environmental problems, if not the most serious, and that, being a problem, it cries out for solutions.

DISCUSSION

The river Yamuna receives a significantly high amount of organic matter, which generally originates from domestic sources. For biodegradation, this organic waste requires oxygen, causing significant depletion of dissolved oxygen in the river water. The oxygen depletion not only affects the biotic community of the river but also affects its self-purification capacity. After biodegradation, the organic matter releases nutrients into the water. High nutrient concentration leads to eutrophication, a condition characterized by significant diurnal variation in dissolved-oxygen concentration and excessive algal growth. At present there is a barrage in the Yamuna river at Mathura. The barrage affects the characteristics of the river by forming a reservoir on the upstream side. This reservoir acts as an oxidation pond treating the river water, which helps faunal diversity survive. The priorities are conserving biological diversity, ensuring sustainable use of the natural resource base, and minimizing pollution and wasteful consumption. The knowledge gaps in relation to the status, distribution, impacts and institutions related to biodiversity need to be addressed for enhanced biodiversity conservation, community empowerment, effective laws and policies, and appropriate development models.


CONCLUSION

The innumerable life forms harboured by forests, deserts, mountains, other land, air and oceans provide food, fodder, fuel, medicine, textiles, etc. The major problems confronting the wetlands are a decrease in biological diversity, particularly of endemic and endangered species; deterioration of water quality; and sedimentation and shrinkage of the areas. Large numbers of people living in and around wetlands have been encroaching upon these areas, and vast areas have already been drained for agriculture, urban expansion and other purposes. Siltation is one of the major problems: deforestation and other anthropogenic activities have accelerated soil erosion, resulting in increased sedimentation rates and the consequent shrinkage of wetlands. Indiscriminate discharge of industrial and domestic effluent, and of leachates generated from improperly disposed industrial solid waste, hazardous waste, municipal solid waste and biomedical waste, into the streams, land or catchment area of the wetlands not only deteriorates the water quality of the system, but the toxic pollutants, i.e. trace heavy metals and trace organics, are also absorbed into the biomass. Residual pesticides and fertilizers generated by excessive use of these commodities are carried with rainwater run-off to the wetlands from the catchment areas. Clear-cutting forests, draining wetlands and altering habitat may directly affect amphibian populations (Petranka et al., 1993; Semlitsch, 1998; Ernst & Rodel, 2005). Our study indicated that the regional herpeto-biodiversity status is good but that conservation is needed; it also showed a steady depletion of the herpetofauna. Throughout the work it was realized that the common people of the region were not educated about the importance of the herpetofauna.

Conversion of wetland areas indiscriminately for aquaculture without proper land-use planning has also resulted in the destruction of a number of wetlands, and these wetlands are the primary requirement for conserving herpetofaunal biodiversity. There are innumerable species whose potential is not yet known. It would therefore be prudent, from an economic point of view, to conserve not only the species we already have information about, but also the species we have not yet identified and described.

REFERENCES

1. Ashley, E.P., J.T.Robinson (1996). Road mortality of amphibians,reptiles and other

wildlife on the Long Point Causeway, Lake Erie, Ontario. Canadian Field-Naturalist

110, 403–412.

2. Blaustein, A.R. & D.B. Wake (1990). Declining amphibian populations: a global phenomenon? TREE 5: 203-204.


3. Campbell, H.W., and S.P. Christman. (1982). Field techniques for herpetofaunal

community analysis. Pages 193-200 In N. J. Scott, Jr. (ed.), Herpetological

Communities, U.S.D.I. Fish and Wildlife Service, Wildlife Research Report 13,

Washington, D.C. 239 pp.

4. Corn, P. S., and R. B. Bury. (1990). Sampling Methods for Terrestrial Amphibians

and Reptiles. USDA Forest Service, General and Technical Report PNW-GTR-256,

34

5. Crump, M.L. and N.J. Scott, Jr. (1994). Visual encounter surveys. Pages 84- 92 in

W.R. Heyer, M.A. Donnelly, R.W. McDiarmid, L.C. Hayek, and M.S. Foster, eds.

Measuring and monitoring biological diversity: standard methods for amphibians.

Smithsonian Institution Press. Washington DC.


7. Daniel, J.C. (1963a). Field guide to the amphibians of western India. Part I.

8. Daniel, J.C. (1963b). Field guide to the amphibians of western India. Part II.Journal of

the Bombay Natural History Society 60(3): 690-702.

9. Dash, M.C. & J.K. Mahanta (1993). Quantitative analysis of the community structure of tropical amphibian assemblages and its significance to conservation. Journal of Bioscience 18:

10. Gibbons, J. W. and R. D. Semlitsch. (1981). Terrestrial drift fences with pitfall traps: an effective technique for quantitative sampling of animal populations. Brimleyana No. 7: 1-16.

11. Molur, S. & S. Walker (Editors) (1998). Report of the Conservation Assessment and

Management Plan (CAMP) Workshop for Amphibians of India (BCPP Endangered

Species Project). ZOO/ CBSG India, Coimbatore, 102pp.

12. Petranka, J. W.M.E. Eldridge & K.E.Haley (1993). Effects of timber harvesting on

southern Appalachian salamanders. Conservation Biology 7:363-370.

13. Sharma, K.K. (2005a). Sonotaxonomy: sound based taxonomy is a novel and

environment friendly approach in systematics. Journal of Cell Tiss. Research 5(3):

1-2.

14. Sharma, K.K. (2005b). Wildlife monitoring by sound analysis system – An authentic

and precise approach in wildlife management. National Conference on Environment


and Natural Disaster Management, November 28-30, 2005. Dept. of Zoology, University of Rajasthan, Jaipur. Abstract No. IL-10, pp 118.

15. Smith, M. A. (1935). The fauna of British India. Reptiles and Amphibians. Vol. II,

Sauria. Taylor and Francis, London, 305 pp.

16. Wilson L.D., McCranie J.R. and Espinal M.R. (2001). The eco-geography of the Honduran herpetofauna and the design of biotic reserves. In: Johnson J.D., Webb R.G., Flores-Villela O.A., editors. Mesoamerican herpetology: systematics, zoogeography, and conservation. University of Texas at El Paso: Centennial Museum; pp. 109-158. Special Publication 1: 1-200.


INDIAN STOCK MARKET TREND PREDICTION USING SUPPORT VECTOR MACHINE

M. Suresh Babu*

Dr. N. Geethanjali**

Prof. B. Satyanarayana***

ABSTRACT

Stock return predictability has been a subject of great controversy. The debate followed issues from market efficiency to the number of factors containing information on future stock returns. The analytical tool of support vector regression on the other hand, has gained great momentum in its ability to predict time series in various applications and also in finance (Smola and Schölkopf, 1998). Support vector machines (SVM) are employed to predict stock market dailytrends: ups and downs. The purpose is to examine the effect of macroeconomic information and technical analysis indicators on the accuracy of the classifiers. The construction of a prediction model requires factors that are believed to have some intrinsic explanatory power. These explanatory factors fall largely into two categories: fundamental and technical. Fundamental factors include for example macro economical indicators, which however, are usually only infrequently published. Technical factors are based solely on the properties of the underlying time series and can therefore be calculated at the same frequency as the time series. Since this study applies support vector regression to high frequent data, only technical factors are considered. It is found that macroeconomic information is suitable to predict stock market trends than the use of technical indicators. In addition, the combination of the two sets of predictive inputs does not improve the forecasting accuracy. Furthermore, the prediction accuracy improves when trading strategies are considered. Support vector machine (SVM) is a very specific type of learning algorithms characterized by the capacity control of the decision function, the use of the kernel functions and the sparsity of the solution. In this paper, we investigate the predictability of financial movement direction with SVM by forecasting the weekly movement direction of BSE30 index. 
To evaluate the forecasting ability of SVM, we compare its performance with those of Linear Discriminant Analysis, Quadratic Discriminant Analysis and Elman Backpropagation Neural Networks. The experimental results show that SVM outperforms the other classification methods. Further, we propose a combining model by integrating SVM with the other classification methods. The combining model performs best among all the forecasting methods.

Keywords: Support Vector Machines, Classification, Stock Market, Technical Indicators.

*Principal, Intel Institute of Science, Anantapur, Andhra Pradesh, India.

**Associate Professor, Department of Computer Science, S.K. University, Anantapur, India.

***Professor & Chairman, Board of Studies, Department of Computer Science, Sri Krishnadevaraya University, Anantapur.

IJREAS Volume 1, Issue 4 (December 2011) ISSN: 2294-3905

International Journal of Research in Engineering & Applied Sciences 68 http://www.euroasiapub.org

1. INTRODUCTION

Forecasting stock market behavior is a very difficult task since its dynamics are complex and

non-linear. For instance, stock return series are generally noisy and may be influenced by

many factors, such as the economy, business conditions, and political events, to name a few.

Indeed, empirical finance shows that publicly available data on financial and economic

variables may explain stock return fluctuations in the Indian Stock Market. For instance, a

number of applications have been proposed to forecast stock market returns with

macroeconomic variables with the use of neural networks and Bayesian networks and support

vector machines. On the other hand, technical indicators have been also used to predict stock

market movements using neural networks, adaptive fuzzy inference system, and fuzzy logic.

The literature shows that economic variables and technical indicators have achieved success

in predicting the stock market. However, none of the previous studies have compared the

performance of the economic information and technical indicators in terms of prediction

accuracy.

The financial market is a complex, evolutionary, and non-linear dynamical system. The field

of financial forecasting is characterized by data intensity, noise, non-stationarity, an unstructured

nature, a high degree of uncertainty, and hidden relationships. Many factors interact in finance

including political events, general economic conditions, and traders’ expectations. Therefore,

predicting finance market price movements is quite difficult. Increasingly, according to

academic investigations, movements in market prices are not random. Rather, they behave in

a highly non-linear, dynamic manner. The standard random walk assumption of futures prices

may merely be a veil of randomness that shrouds a noisy non-linear process.

Support vector machine (SVM) is a specific type of learning algorithm characterized by

the capacity control of the decision function, the use of the kernel functions and the sparsity

of the solution. Established on the unique theory of the structural risk minimization principle

to estimate a function by minimizing an upper bound of the generalization error, SVM is

shown to be very resistant to the overfitting problem, ultimately achieving a high

generalization performance.

Another key property of SVM is that training SVM is equivalent to solving a linearly

constrained quadratic programming problem so that the solution of SVM is always unique

and globally optimal, unlike neural networks training which requires nonlinear optimization

with the danger of getting stuck at local minima.


Some applications of SVM to financial forecasting problems have been reported recently. In

most cases, the degree of accuracy and the acceptability of certain forecasts are measured by

the estimates’ deviations from the observed values. For the practitioners in financial market,

forecasting methods based on minimizing forecast error may not be adequate to meet their

objectives. In other words, trading driven by a certain forecast with a small forecast error may

not be as profitable as trading guided by an accurate prediction of the direction of movement.

The goal of this study is to predict stock price movements only from the statistical properties

of the underlying financial time series and to explore the predictability of financial market

movement direction with SVM. Therefore, financial indicators are extracted from the time

series, which are then used by a support vector regression (SVR) to predict market

movement.

2.2.2 Support vector machines

Support Vector Machines (SVM) is a supervised statistical learning technique introduced by

Vapnik. It is one of the standard tools for machine learning successfully applied in many

different real-world problems. For instance, they have been successfully applied in financial

time series trend prediction. The SVM were originally formulated for binary classification.

The SVM seek to implement an optimal marginal classifier that minimizes the structural risk

in two steps. First, SVM transform the input to a higher dimensional space with a kernel

(mapping) function. Second, SVM linearly combine them with a weight vector to obtain the

output. As a result, SVM provide very interesting advantages. They avoid local minima in the

optimization process. In addition, they offer scalability and generalization capabilities. For

instance, to solve a binary classification problem in which the output y ∈ {−1, +1}, SVM seek a hyperplane w·Φ(x) + b = 0 that separates the data of classes +1 and −1 with a maximal margin. Here, x denotes the input feature vector, w is a weight vector, Φ is the mapping function to a higher dimension, and b is the bias used for classification of samples. The maximization of the margin is equivalent to minimizing the norm of w. Thus, to find w and b, the following optimization problem is solved:

Minimize: ½ ||w||² + C Σⁿᵢ₌₁ ξᵢ

s.t. yᵢ(w·Φ(xᵢ) + b) ≥ 1 − ξᵢ, ξᵢ ≥ 0, i = 1, ..., n,

where C is a strictly positive parameter that determines the tradeoff between the maximum margin and the minimum classification error, n is the total number of samples, and ξᵢ is the error magnitude of the classification.


The conditions ensure that no training example should lie within the margins. The number of training errors and examples within the margins is controlled by the minimization of the term C Σᵢ ξᵢ. The solution to the previous minimization problem gives the decision frontier:

f(x) = Σᵢ yᵢ αᵢ Φ(xᵢ)·Φ(x) + b,

where each αᵢ is a Lagrange coefficient. As mentioned before, the role of the kernel function is

to implicitly map the input vector into a high-dimensional feature space to achieve better

separability. In this study the polynomial kernel is used since it is a global kernel. For

instance, global kernels allow data points that are far away from each other to have an

influence on the kernel values as well.

K(x, xᵢ) = Φ(xᵢ)·Φ(x) = ((xᵢ·x) + 1)^d,

where the kernel parameter d is the degree of the polynomial to be used. In this study, d is set

to 2. Finally, the optimal decision separating function can be obtained as follows:

f(x) = sign( Σᵢ αᵢ yᵢ K(xᵢ, x) + b ).
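As an illustrative sketch of this setup (not the authors' code; the toy data, the scikit-learn toolchain and all parameter values here are assumptions), a soft-margin SVM with the degree-2 polynomial kernel ((xᵢ·x) + 1)² can be trained as follows:

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class problem: label +1 when the two features agree in sign.
# A linear classifier cannot separate this pattern, but the x1*x2 term
# produced by a degree-2 polynomial kernel can.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)

# SVC's polynomial kernel is (gamma * <x, xi> + coef0)^degree; with
# gamma=1, coef0=1 and degree=2 it matches ((xi.x) + 1)^2.  C controls
# the tradeoff between a wide margin and classification errors.
clf = SVC(kernel="poly", degree=2, gamma=1.0, coef0=1.0, C=1.0)
clf.fit(X, y)
train_accuracy = clf.score(X, y)
```

Here C plays the role of the cost parameter in the minimization above, and the learned decision function is exactly the kernel expansion over the support vectors.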

2. THEORY OF SVM IN CLASSIFICATION

The indicators are arbitrarily chosen among a wide variety of financial indicators. The chosen

indicators include price differences, moving averages, relative strength and so called

stochastic indicators as shown in the figure. These indicators are then preprocessed in the

sense that the mean vector is subtracted and each indicator time series is divided by its

standard deviation, in order to obtain indicator values with zero mean and unit variance. Before the

SVR model is trained, the parameters of the SVR model are optimized using a cross

validation procedure on a training set. After that, the optimized model is used to predict

financial market movement.

In the process of model selection, models are chosen only on the basis of performance over

out of sample data, in order to avoid the critique of judging the model on the basis of in

sample performance. The model selection is based on a cross validation procedure commonly

used in Data Mining.

Our main results show that stock market prediction based on support vector regression

significantly outperforms a random stock market prediction. However, the prediction on

average is correct only 50.69 percent of the time, with a standard deviation of 0.26 percent.


We present a basic theory of the support vector machine model. Let D be the smallest radius

of the sphere that contains the data (example vectors). The points on either side of the

separating hyperplane have distances to the hyperplane. The smallest distance is called the

margin of separation. The hyperplane is called optimal separating hyperplane (OSH), if the

margin is maximized. Let q be the margin of the optimal hyperplane. The points that are

distance q away from the OSH are called the support vectors.

Consider the problem of separating a set of training vectors belonging to two separate

classes, G = {(xᵢ, yᵢ), i = 1, 2, ..., N}, with a hyperplane wᵀφ(x) + b = 0, where xᵢ ∈ Rⁿ is the ith

input vector and yᵢ ∈ {−1, 1} is the known binary target. The original SVM classifier satisfies the

following conditions:

wᵀφ(xᵢ) + b ≥ 1 if yᵢ = 1, (1)

wᵀφ(xᵢ) + b ≤ −1 if yᵢ = −1, (2)

or equivalently,

yᵢ[wᵀφ(xᵢ) + b] ≥ 1, i = 1, 2, ..., N, (3)

where φ: Rⁿ → Rᵐ is the feature map, mapping the input space to a usually high-dimensional

feature space where the data points become linearly separable.

The distance of a point xᵢ from the hyperplane is

d(xᵢ) = |wᵀφ(xᵢ) + b| / ||w||. (4)

The margin is 2/||w|| according to its definition. Hence, we can find the hyperplane that

optimally separates the data by solving the optimization problem:

Min Φ(w) = ½ ||w||² (5)

under the constraints of Eq. (3).

The solution to the above optimization problem is given by the saddle point of the Lagrange

function

L(w, b, α) = ½ ||w||² − Σᴺᵢ₌₁ αᵢ { yᵢ[wᵀφ(xᵢ) + b] − 1 } (6)

under the constraints of Eq. (3), where αi are the nonnegative Lagrange multipliers. So far

the discussion is restricted to the case where the training data is separable. To generalize the

problem to the non-separable case, slack variable ξi is introduced such that

yᵢ[wᵀφ(xᵢ) + b] ≥ 1 − ξᵢ, ξᵢ ≥ 0, i = 1, 2, ..., N. (7)

Thus, for an error to occur the corresponding ξᵢ must exceed unity, so Σᵢ ξᵢ is an upper

bound on the number of training errors. Hence, a natural way to assign an extra cost for errors

is to change the objective function from Eq. (5) to


Min Φ(w, ξ) = ½ ||w||² + C Σᴺᵢ₌₁ ξᵢ (8)

under the constraints of Eq. (7), where C is a positive constant parameter used to control the

tradeoff between the training error and the margin. In this paper, we choose C =50 based on

our experiment experiences. Similarly, solve the optimal problem by minimizing its Lagrange

function

L(w, b, ξ, α, μ) = ½ ||w||² + C Σᴺᵢ₌₁ ξᵢ − Σᴺᵢ₌₁ αᵢ { yᵢ[wᵀφ(xᵢ) + b] − 1 + ξᵢ } − Σᴺᵢ₌₁ μᵢ ξᵢ (9)

under the constraints of Eq. (7), where αi,μi are the non-negative Lagrange multipliers.

The Karush–Kuhn–Tucker (KKT) conditions [16] for the primal problem are

∂L/∂w = w − Σᴺᵢ₌₁ αᵢ yᵢ φ(xᵢ) = 0, (10)

∂L/∂b = − Σᴺᵢ₌₁ αᵢ yᵢ = 0, (11)

∂L/∂ξᵢ = C − αᵢ − μᵢ = 0, (12)

yᵢ[wᵀφ(xᵢ) + b] − 1 + ξᵢ ≥ 0, (13)

ξᵢ ≥ 0, (14)

αᵢ ≥ 0, (15)

μᵢ ≥ 0, (16)

αᵢ { yᵢ[wᵀφ(xᵢ) + b] − 1 + ξᵢ } = 0, (17)

μᵢ ξᵢ = 0. (18)

Hence,

w = Σᴺᵢ₌₁ αᵢ yᵢ φ(xᵢ). (19)

We can use the KKT complementarity conditions, Eqs. (17) and (18), to determine b. Note

that Eq. (12) combined with Eq. (18) shows that ξⱼ = 0 if αⱼ < C. Thus we can simply take any

training data point for which 0 < αⱼ < C and use Eq. (17) (with ξⱼ = 0) to compute b:

b = yⱼ − wᵀφ(xⱼ). (20)

It is numerically reasonable to take the mean value of all b resulting from such computing.

Hence,

b = (1/Nₛ) Σⱼ [ yⱼ − wᵀφ(xⱼ) ], (21)

where Ns is the number of the support vectors.

For a new data x, the classification function is then given by

f(x) = Sign(wT ᵠ(x) + b) (22)


Substituting Eqs. (19) and (21) into Eq. (22), we get the final classification function

f(x) = sign( Σᴺᵢ₌₁ αᵢ yᵢ φ(xᵢ)ᵀφ(x) + (1/Nₛ) Σⱼ [ yⱼ − Σᴺᵢ₌₁ αᵢ yᵢ φ(xᵢ)ᵀφ(xⱼ) ] ) (23)

If there is a kernel function such that K(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ), it is usually unnecessary to

explicitly know what φ(x) is, and we only need to work with the kernel function in the training

algorithm.

Therefore, the non-linear classification function is

f(x) = sign( Σᴺᵢ₌₁ αᵢ yᵢ K(xᵢ, x) + (1/Nₛ) Σⱼ [ yⱼ − Σᴺᵢ₌₁ αᵢ yᵢ K(xᵢ, xⱼ) ] ) (24)

Any function satisfying Mercer’s condition [17] can be used as the kernel function. In this

investigation, the radial kernel K(s, t) = exp(−||s − t||² / 10) is used as the kernel function of the

SVM because the radial kernel tends to give good performance under general smoothness

assumptions. Consequently, it is especially useful if no additional knowledge of the data is

available.
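This kernel is easy to state in code; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def radial_kernel(s, t):
    """The paper's radial kernel K(s, t) = exp(-||s - t||^2 / 10)."""
    s, t = np.asarray(s, dtype=float), np.asarray(t, dtype=float)
    d = s - t
    return np.exp(-np.dot(d, d) / 10.0)

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])
k_xx = radial_kernel(x, x)   # always 1: a point is maximally similar to itself
k_xz = radial_kernel(x, z)   # decays with the squared distance ||x - z||^2 = 13
```

The kernel is symmetric, K(s, t) = K(t, s), and satisfies Mercer's condition as required above.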

3. EXPERIMENT DESIGN

Several financial indicators are calculated in order to reduce dimensionality of the time

series:

RDP(t) = (p(t) − p(t−1)) / p(t−1): the relative price difference of prices p(t) at time t and p(t−1) at time t−1.

EMA(t) = α p(t) + (1 − α) EMA(t−1): the exponential moving average of the prices p(t), with smoothing factor α.

RSI(t) = U[t−n; t] / (U[t−n; t] + D[t−n; t]): the relative strength indicator of the number of upward movements U[t−n; t] and downward movements D[t−n; t] in the period from t−n until time t.

STO(t) = (p(t) − L[t−n; t]) / (H[t−n; t] − L[t−n; t]): the stochastic indicator of the stock price p(t), lowest stock price L[t−n; t] and highest stock price H[t−n; t] in the period from t−n until time t.
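A sketch of how these four indicators can be computed from a raw price series, using the standard textbook definitions; the window length n and smoothing factor alpha are illustrative assumptions, since the paper does not state the values it used:

```python
import numpy as np

def indicators(p, n=14, alpha=0.1):
    """Compute RDP, EMA, RSI and stochastic indicators for a price series p.

    The window n and smoothing factor alpha are illustrative choices."""
    p = np.asarray(p, dtype=float)
    rdp = np.zeros_like(p)
    rdp[1:] = (p[1:] - p[:-1]) / p[:-1]       # relative price difference
    ema = np.empty_like(p)
    ema[0] = p[0]
    for t in range(1, len(p)):                # exponential moving average
        ema[t] = alpha * p[t] + (1 - alpha) * ema[t - 1]
    rsi = np.full_like(p, 0.5)                # neutral value until the window fills
    sto = np.full_like(p, 0.5)
    for t in range(n, len(p)):
        moves = np.diff(p[t - n:t + 1])
        up = moves[moves > 0].sum()
        down = -moves[moves < 0].sum()
        if up + down > 0:                     # relative strength indicator
            rsi[t] = up / (up + down)
        lo, hi = p[t - n:t + 1].min(), p[t - n:t + 1].max()
        if hi > lo:                           # stochastic indicator
            sto[t] = (p[t] - lo) / (hi - lo)
    return rdp, ema, rsi, sto

# Demo on a steadily rising price series: the RSI and stochastic
# indicators both saturate at 1 once the window fills.
p = np.linspace(100.0, 120.0, 50)
rdp, ema, rsi, sto = indicators(p)
```

As described in the text, each indicator series would then be standardized to zero mean and unit variance before training.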

The figure illustrates some of the properties of the indicators derived as above from a random

time series. The kernel densities are estimated for each indicator with a bandwidth of 0.001.

Note that the RDP and EMA indicators are rather Gaussian distributed, while the RSI and

Stochastic indicators have several modes; the Stochastic indicator in particular seems to be a

mixture of two different Gaussian distributions.

In our empirical analysis, we set out to examine the weekly changes of the BSE30 Index. The

BSE30 Index is calculated and disseminated. It measures the composite price performance of

30 highly capitalized stocks trading on the Bombay Stock Exchange (BSE), representing a


broad cross-section of Indian industries. Trading in the index has gained unprecedented

popularity in major financial markets around the world. Futures and options contracts on the

BSE30 Index are currently traded on the National Stock Exchange (NSE), under the regulation of

the Securities and Exchange Board of India (SEBI). The increasing diversity of financial instruments related to

the BSE30 Index has broadened the dimension of global investment opportunity for both

individual and institutional investors. There are two basic reasons for the success of these

index trading vehicles. First, they provide an effective means for investors to hedge against

potential market risks. Second, they create new profit making opportunities for market

speculators and arbitrageurs. Therefore, it has profound implications and significance for

researchers and practitioners alike to accurately forecast the movement direction of BSE30

Index.

3. MODEL INPUTS SELECTION

Most of the previous researchers have employed multivariate inputs. Several studies have

examined the cross sectional relationship between stock index and macroeconomic variables.

The potential macroeconomic input variables which are used by the forecasting models

include term structure of interest rates (TS), short-term interest rate (ST), long term interest

rate (LT), consumer price index (CPI), industrial production (IP), government consumption

(GC), private consumption (PC), gross national product (GNP) and gross domestic product

(GDP). However, the Indian interest rate has dropped to almost zero since 1990. Weekly data for the other

macroeconomic variables are not available for our study.

Indian consumption capacity is limited in the domestic market. The economy’s growth has a

close relationship with Indian exports. The largest export destination for India is the United States

of America (USA), which is the leading economy in the world. Therefore, the economic

condition of USA influences Indian economy, which is well represented by the BSE30 Index.

Just as the BSE30 Index represents the Indian economy, the S&P 500 Index is a well-known indicator of the

economic condition of the USA. Hence, the S&P 500 Index is selected as a model input. Another

important factor that affects Indian exports is the exchange rate of the US Dollar against the Indian

Rupee (Rs), which is also selected as a model input. The prediction model can be written as the

following function:

Direction_t = F(S_{t−1}^{S&P500}, S_{t−1}^{IND}), (25)

where S_{t−1}^{S&P500} and S_{t−1}^{IND} are the first-order differences of the natural logarithms of

the raw S&P 500 Index and BSE30 Index at time t−1, respectively. Such transformations implement

an effective detrending of the original time series.


Fig. 1. First-order differences of the natural logarithms of weekly prices of the BSE30 Index and the S&P 500 Index

(observations from October 2010 to September 2011).

Directiont is a categorical variable to indicate the movement direction of BSE30 Index at time t.

If BSE30 Index at time t is larger than that at time t − 1, Directiont is 1. Otherwise, Directiont is −1.

The above model inputs selection is based only on a macroeconomic analysis. As shown in Fig. 1, the

behaviours of the BSE30 Index and the S&P 500 Index are very complex. It is impossible to give an

explicit formula to describe the underlying relationship between them.
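The detrending transformation and the Direction_t labeling of Eq. (25) can be sketched as follows (the sample closing values are hypothetical):

```python
import numpy as np

def log_diff(series):
    """First-order difference of the natural logarithm: s_t = ln(p_t) - ln(p_{t-1})."""
    p = np.asarray(series, dtype=float)
    return np.diff(np.log(p))

def direction(index_levels):
    """Direction_t = 1 if the index rose since t-1, otherwise -1."""
    p = np.asarray(index_levels, dtype=float)
    return np.where(np.diff(p) > 0, 1, -1)

# Hypothetical weekly BSE30 closes for illustration.
bse30 = [17500.0, 17650.0, 17600.0, 17720.0]
inputs = log_diff(bse30)    # detrended model inputs S_t
labels = direction(bse30)   # up, down, up -> [1, -1, 1]
```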

3.1. Data collection

We obtain the historical data from the finance section of Yahoo and the Bombay Stock

Exchange and National Stock Exchange respectively. The whole data set covers the period

from January 1, 2007 to December 31, 2010, a total of 694 pairs of observations. The data set

is divided into two parts. The first part (652 pairs of observations) is used to determine the

specifications of the models and parameters. The second part (42 pairs of observations) is

reserved for out-of-sample evaluation and comparison of performances among various

forecasting models.
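In code, this chronological split looks like the following sketch, with placeholder arrays standing in for the real inputs and labels:

```python
import numpy as np

# 694 weekly observation pairs: the first 652 for model building and
# parameter selection, the last 42 held out for out-of-sample evaluation.
# X and y are placeholders for the real inputs and direction labels.
X = np.zeros((694, 2))                          # e.g. (S_{t-1}^{S&P500}, S_{t-1}^{IND})
y = np.where(np.arange(694) % 2 == 0, 1, -1)    # placeholder labels

# Keep chronological order: no shuffling, so the held-out block is
# genuinely out of sample.
X_train, X_test = X[:652], X[652:]
y_train, y_test = y[:652], y[652:]
```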

3.2. Comparisons with other forecasting methods

To evaluate the forecasting ability of SVM, we use the random walk model (RW) as a

benchmark for comparison. RW is a one-step-ahead forecasting method, since it uses the

current actual value to predict the future value as follows:

ŷ_{t+1} = y_t, (26)

where yt is the actual value in the current period t and yˆt+1 is the predicted value in the next

period.
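Eq. (26) is trivial to implement; a small sketch with made-up values:

```python
import numpy as np

def random_walk_forecast(y):
    """Eq. (26): the forecast for period t+1 is simply the current value y_t."""
    y = np.asarray(y, dtype=float)
    return y[:-1]  # aligned with the actual values y[1:]

y = np.array([100.0, 102.0, 101.0, 103.0])
yhat = random_walk_forecast(y)   # [100., 102., 101.]
errors = y[1:] - yhat            # [ 2.,  -1.,   2.]
```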

We also compare the SVM’s forecasting performance with that of linear discriminant

analysis (LDA), quadratic discriminant analysis (QDA) and Elman backpropagation neural


networks (EBNN). LDA can handle the case in which the within-class frequencies are

unequal, and its performance has been examined on randomly generated test data. This

method maximizes the ratio of between-class variance to the within-class variance in any

particular data set, thereby guaranteeing maximal separability.

QDA is similar to LDA, only dropping the assumption of equal covariance matrices.

Therefore, the boundary between two discrimination regions is allowed to be a quadratic

surface (for example, ellipsoid, hyperboloid, etc.) in the maximum likelihood argument with

normal distributions. In this paper, we derive a linear discriminant function of the form:

L(S_{t−1}^{S&P500}, S_{t−1}^{IND}) = a₀ + a₁ S_{t−1}^{S&P500} + a₂ S_{t−1}^{IND} (27)

and a quadratic discriminant function of the form:

Q(S_{t−1}^{S&P500}, S_{t−1}^{IND}) = a + P (S_{t−1}^{S&P500}, S_{t−1}^{IND})ᵀ + (S_{t−1}^{S&P500}, S_{t−1}^{IND}) T (S_{t−1}^{S&P500}, S_{t−1}^{IND})ᵀ, (28)

where a₀, a₁, a₂, a, P and T are coefficients to be estimated.

Elman Backpropagation Neural Network is a partially recurrent neural network. The

connections are mainly feedforward but also include a set of carefully chosen feedback

connections that let the network remember cues from the recent past. The input layer is

divided into two parts: the true input units and the context units that hold a copy of the

activations of the hidden units from the previous time step. Therefore, network activation

produced by past inputs can cycle back and affect the processing of future inputs.
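A minimal NumPy sketch of one forward step of such a network (the weight shapes and values are arbitrary assumptions, and no training is shown):

```python
import numpy as np

def elman_step(x, context, W_in, W_ctx, W_out, b_h, b_o):
    """One forward step of an Elman network.

    The hidden activation depends on the current input x and on the context
    units, which hold the hidden activations from the previous time step."""
    h = np.tanh(W_in @ x + W_ctx @ context + b_h)
    y = np.tanh(W_out @ h + b_o)
    return y, h  # h becomes the next step's context

rng = np.random.default_rng(1)
n_in, n_hid = 2, 4
W_in = rng.normal(size=(n_hid, n_in))
W_ctx = rng.normal(size=(n_hid, n_hid))   # feedback (context) connections
W_out = rng.normal(size=(1, n_hid))
b_h, b_o = np.zeros(n_hid), np.zeros(1)

context = np.zeros(n_hid)                 # empty memory at the start
for x in [np.array([0.1, -0.2]), np.array([0.05, 0.3])]:
    y, context = elman_step(x, context, W_in, W_ctx, W_out, b_h, b_o)
```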

3.3. A combining model

Given a task that requires expert knowledge to perform, k experts may be better than one if

their individual judgments are appropriately combined. Based on this idea, predictive

performance can be improved by combining various methods. Therefore, we propose a

combining model by integrating SVM with other classification methods as follows:

f_combined(x) = Σᵢ wᵢ fᵢ(x), (29)

where wᵢ is the weight assigned to classification method i and fᵢ(x) is the score produced by method i. We would like to

determine the weight scheme based on the information from the training phase. Under this

strategy, the relative contribution of a forecasting method to the final combined score

depends on the in sample forecasting performance of the learned classifier in the training

phase. Conceptually, a well-performed forecasting method should be given a larger weight

than the others during the score combination. In the investigation, we adopt the weight

scheme as follows:


wᵢ = aᵢ / Σⱼ aⱼ, (30)

where aᵢ is the in-sample performance achieved by forecasting method i.
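This weighting scheme can be sketched as follows; the per-method scores and in-sample hit ratios below are hypothetical:

```python
import numpy as np

def combine_scores(scores, in_sample_accuracy):
    """Weight each classifier's directional score by its normalized
    in-sample performance, then take the sign of the weighted sum."""
    a = np.asarray(in_sample_accuracy, dtype=float)
    w = a / a.sum()                                  # w_i = a_i / sum_j a_j
    combined = w @ np.asarray(scores, dtype=float)   # sum_i w_i f_i(x)
    return np.sign(combined)

# Hypothetical +1/-1 predictions of LDA, QDA, EBNN and SVM for three weeks,
# weighted by hypothetical in-sample hit ratios.
scores = [[1, -1, 1],
          [1, 1, 1],
          [-1, 1, 1],
          [1, 1, -1]]
combined = combine_scores(scores, [0.55, 0.69, 0.69, 0.73])
```

A better-performing method contributes more to each combined score, exactly as the weight scheme intends.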

Table 1: Forecasting performance of different classification methods

Classification method    Hit ratio (%)
RW                       50
LDA                      55
QDA                      69
EBNN                     69
SVM                      73
Combining model          75

Table 2: Covariance matrix of input variables when Direction_t = −1

                     S_{t−1}^{IND}     S_{t−1}^{S&P500}
S_{t−1}^{IND}        0.00015167706     0.00002147347
S_{t−1}^{S&P500}     0.00002147347     0.00044862762

4. EXPERIMENT RESULTS

Each of the forecasting models described in the last section is estimated and validated on in-

sample data. The model estimation and selection process is then followed by an empirical

evaluation based on the out-of-sample data. At this stage, the relative performance of

the models is measured by hit ratio. Table 1 shows the experiment results.

RW performs worst, producing only 50% hit ratio. RW assumes not only that all historic

information is summarized in the current value, but also that increments, positive or

negative, are uncorrelated (random) and balanced, that is, with an expected value equal to

zero. In other words, in the long run there are as many positive as negative fluctuations

making long term predictions other than the trend impossible.

SVM has the highest forecasting accuracy among the individual forecasting methods. One

reason that SVM performs better than the earlier classification methods is that SVM is

designed to minimize the structural risk, whereas the previous techniques are usually based

on minimization of empirical risk. In other words, SVM seeks to minimize an upper bound of

the generalization error rather than minimizing the training error, so SVM is usually less

vulnerable to the overfitting problem. QDA outperforms LDA in terms of hit ratio, because

LDA assumes that all the classes have equal covariance matrices, which is not consistent with


the properties of input variable belonging to different classes as shown in Tables 2 and 3. In

fact, the two classes have different covariance matrices. Heteroscedastic models are more

appropriate than homoscedastic models.

The integration of SVM and the other forecasting methods improves the forecasting

performance. Different classification methods typically have access to different information

and therefore produce different forecasting results. Given this, we can combine the individual

forecaster’s various information sets to produce a single superior information set from which

a single superior forecast could be produced.

Table 3: Covariance matrix of input variables when Direction_t = 1

                     S_{t−1}^{IND}      S_{t−1}^{S&P500}
S_{t−1}^{IND}        0.00018240800     −0.00002932242
S_{t−1}^{S&P500}    −0.00002932242      0.00044571885

The method of support vector regression includes several parameters to be chosen, which can,

e.g., be optimized using cross-validation.

These parameters include the chosen kernel with parameter γ, the ε of the ε-insensitive loss

function, the cost of error C and the number of training samples. The advantage of using a

kernel is that it can sometimes make linearly inseparable cases separable, as shown at the top of

the figure. In this case, the black and white labelled points on the left side are not linearly

separable. After the kernel transformation, however, the black and white labelled points

may become separable in the new space, where the classification problem becomes

trivial. Therefore the choice of kernel is of high importance, as is the parameter γ of the

kernel function.

Another parameter is the ε of the ε-insensitive loss function, which is illustrated at the bottom

of the figure. The support vector regression model is trained by placing a penalty on values

that are off target. The penalty depends on the ε-insensitive loss function, with parameter ε.

The idea is to penalize values off target only if the difference is larger than the absolute value

of ε.

Given the kernel K(xᵢ, xⱼ) = φ(xᵢ)ᵀφ(xⱼ) and the training set of instance-label pairs (xᵢ, yᵢ), i = 1, ..., l,

where xᵢ ∈ Rⁿ and yᵢ ∈ {1, −1}, the optimization problem of the support vector machine can

be formulated as minimizing ½ ||w||² + C Σᵢ ξᵢ subject to yᵢ(wᵀφ(xᵢ) + b) ≥ 1 − ξᵢ, ξᵢ ≥ 0.

The support vector machine then maximizes the margin of the separating hyperplane between the

classes, which is equivalent to minimizing ||w|| and therefore also to minimizing ||w||² / 2.


Cross-validation

Figure 4: Cross-validation setup. Several parameter values are tested for prediction accuracy on a training set,

from which the optimal parameter combination is then chosen for further prediction on the test set.

Since the SVR parameters can be easily controlled manually, the optimal set of parameters is

chosen on a training set and then used on the following test set. The cross-validation is

applied as illustrated in the figure. The total data set is divided into two parts, one for cross-

validation and one for testing. A third part of the data set, which would serve to optimize the structure of

the model, such as the choice of indicators, is omitted in this study.

In order to optimize the number of training samples, the cost of error C, the kernel parameter

γ and the parameter ε of the ε-insensitive loss function, k-fold cross-validation is used as

follows: the dataset is divided into k folds of equal size; subsequently, a model is built on

all possible (k) combinations of k−1 folds, and each time the remaining fold is used

for validation. The best model is the one that performs best on average over the k validation

folds. The benefit of using a cross-validation procedure is that by construction it ensures

that model selection is based entirely on out-of-sample rather than in-sample performance.

Thus, the search for the best Support Vector Regression model is immune to a critique of

drawing conclusions about the merits of a factor model based on its in-sample performance.

In this study, a 10-fold cross-validation procedure was used for each parameter above. In

each validation loop, different values for each parameter are chosen, while the other

parameters are set constant. Then the SVR model is trained with this set of parameters and

the prediction accuracy is calculated. This is done for all parameter combinations, and then

the combination with the maximal prediction accuracy is chosen.
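With scikit-learn (an assumed toolchain; the paper does not name its software, and the toy data and grid values here are illustrative), the described grid search over C, gamma and epsilon with 10-fold cross-validation can be sketched as:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, KFold
from sklearn.svm import SVR

# Toy regression data standing in for the indicator matrix and returns.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))
y = X @ np.array([0.5, -0.3, 0.2, 0.0]) + 0.05 * rng.normal(size=120)

# Grid over the parameters named in the text: cost C, kernel parameter
# gamma and the epsilon of the epsilon-insensitive loss; k = 10 folds.
param_grid = {"C": [1.0, 10.0, 100.0],
              "gamma": [0.01, 0.1, 1.0],
              "epsilon": [0.01, 0.1]}
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=KFold(n_splits=10))
search.fit(X, y)
best = search.best_params_   # the combination with the best average score
```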


Basic model

Figure 5: The basic model. The machine is trained on the past values of the indicators. The resulting model is

used to predict the movement on the next day (= 108 data points). After that the model is shifted and the procedure

starts again.

The basic simulation consists of two steps: First, at month t, all historical values for all

explanatory factors, together with the differences in returns for the periods t − n₁ till t − 1, are

used to build numerous support vector regressions. Thus the dependent variable is the return

of the stock in the period t till t + n₂. The variable n₂ is arbitrarily set to 108, in order

to decrease calculation time. The independent variables are the technical indicators as

described above.

Second, once the prediction is calculated, the model is shifted 108 data points and the model

is built again in order to predict the next 108 stock price movements.

Using only historically available data ensures the implementation of the trading strategy is

carried out without the benefit of foresight, in the sense that investment decisions are not

based on data that have become available after any of the to-be-predicted periods. Moreover,

investment decisions for the to-be-predicted months are always based on the entire factor set

of historical data, ensuring that no variable-selection procedures based on extensive

manipulation of the whole available data have been carried out. At any rate, the utilized

cross-validation procedure for model selection ensures that the best candidate model is

selected on the basis of performance in the training set and not on the basis of performance

on external validation samples.

Results and discussion

The data set consists of 5-minute closing prices p(t) for 28 stocks in the BSE Sensex. The

missing stocks are Satyam and Hypo Real Estate, due to data unavailability. With a time

frame of more than six years between April 2004 and August 2010, the data set comprises 140,000

data points per stock.


From this data set, the log return of each stock i is calculated as

xᵢ(t) = log(pᵢ(t)) − log(pᵢ(t−1)),

with price pᵢ(t) at time t, as well as the market average as

x_market(t) = (1/N) Σᵢ xᵢ(t)

over all N stocks i. From this, the log return above market is calculated as

xᵢ'(t) = xᵢ(t) − x_market(t) for each stock i.
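These steps translate directly into code; a sketch with a tiny hypothetical price matrix:

```python
import numpy as np

def log_returns_above_market(prices):
    """prices: (T, N) array of closes for N stocks over T times.

    Returns the (T-1, N) array x_i'(t): each stock's log return minus the
    cross-sectional market average at the same time."""
    p = np.asarray(prices, dtype=float)
    x = np.log(p[1:]) - np.log(p[:-1])          # per-stock log returns x_i(t)
    x_market = x.mean(axis=1, keepdims=True)    # average over all stocks
    return x - x_market                         # x_i'(t) = x_i(t) - x_market(t)

# Two stocks over three hypothetical 5-minute closes.
prices = np.array([[100.0, 50.0],
                   [110.0, 50.0],
                   [105.0, 55.0]])
excess = log_returns_above_market(prices)
```

By construction, the excess returns sum to zero across stocks at each time step.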

Cross-validation

Several parameter values are chosen for each of the machine parameters. The cost and

training-length parameters show linear dependencies, while the kernel parameter gamma

shows a quadratic dependency. The ε parameter has a rather nonlinear relationship to the

prediction accuracy. Several parameter conditions were tested on the first half of the data set.

The figure shows the tested values for each parameter. The optimality criterion used here is

the cumulated return. Therefore, the model is trained with the parameter set, the prediction

calculated and then the return resulting from the prediction is cumulated over time. The

parameter values are tested on half of the data set, that is between May 2009 and July 2011.

The top left of the figure shows the results for different values of the cost parameter. With

an increasing cost value, the cumulated return increases. This seems plausible, since with an

increasing cost the model is trained longer. However, the parameter exploration is stopped at

a cost value of 1000, since higher values increase computation time dramatically.

The top right of the figure shows different values of the parameter ε of the ε-insensitive

loss function. Here the relation to the cumulated return appears rather nonlinear: the

cumulated return decreases with increasing ε only as a general tendency. Overall, however,

smaller values of ε seem to be more successful. Since this value controls the penalty of the

training algorithm, a small value means that predictions only slightly off-target are already

penalized.
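The ε-insensitive loss discussed here has a one-line form; the sketch below is illustrative, with ε playing the role of the tube-width parameter explored in the figure:

```python
def eps_insensitive_loss(y_true, y_pred, eps):
    """L = max(0, |y - f(x)| - eps): residuals inside the eps-tube cost
    nothing; outside it the penalty grows linearly with the residual."""
    return max(0.0, abs(y_true - y_pred) - eps)
```

A small eps therefore starts penalizing a prediction almost as soon as it is off-target, consistent with the observation that smaller values make the training criterion stricter.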

The kernel parameter gamma, plotted for different values on the bottom left, seems to

approach an optimal value around 1. This parameter controls the shape of the kernel. With

high parameter values the kernel becomes rather flat, and the model increasingly predicts

future movements only linearly, which is clearly insufficient. With small parameter values

the kernel becomes very thin, and the training data are increasingly overfitted, with

decreasing generalization performance. This again results in low prediction performance.

Last, with an increasing number of training points the prediction performance


increases. Therefore the quality of the trained model increases with the number of training

samples.
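The selection procedure described in this section amounts to a grid search scored by cumulated return. The sketch below is a hypothetical reconstruction: `train_and_predict` stands in for fitting the support vector machine with a given parameter set and returning predicted movement signs.

```python
# Hypothetical sketch of parameter selection by cumulated return; the
# parameter names mirror those explored above (cost, eps, gamma,
# training length), but this is not the authors' code.
import itertools
import math

def cumulated_return(pred_signs, realized_returns):
    """Trade the prediction each period: long (+1) or short (-1)."""
    return sum(s * r for s, r in zip(pred_signs, realized_returns))

def grid_search(train_and_predict, returns, grid):
    """grid maps parameter names to candidate value lists; the winning
    parameter set maximizes the cumulated return on the training data."""
    best_params, best_score = None, -math.inf
    for combo in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), combo))
        signs = train_and_predict(params, returns)
        score = cumulated_return(signs, returns)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score
```

Scoring only on the first half of the data leaves the second half untouched for the final accuracy test.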

Prediction accuracy

The optimized parameters were tested with the basic model approach described above on the

second half of the data set. The prediction accuracy over all 28 stocks reached a mean of

50.69 percent with a standard deviation of 0.26 percent. With this performance, the reported

approach significantly outperformed a random prediction approach. Even though a gain of 0.69

percentage points may already be a valuable trading prediction, this approach is market

neutral and operates only on the basic statistical properties of market movements.
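Whether a 50.69 percent hit rate significantly beats the 50 percent of random guessing depends on the number of predictions, which the text does not report; the normal-approximation check below uses an assumed sample size purely for illustration.

```python
import math

def z_score(hit_rate, n_predictions):
    """z = (p - 0.5) / sqrt(0.25 / n) under the null of random guessing
    (normal approximation to the binomial distribution)."""
    return (hit_rate - 0.5) / math.sqrt(0.25 / n_predictions)

# Illustrative only: assuming 70,000 out-of-sample predictions per stock,
# a 50.69 percent hit rate sits roughly 3.65 standard errors above chance.
z = z_score(0.5069, 70_000)
```

With samples of this order, even a sub-percentage-point edge over chance is statistically distinguishable from a coin flip.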

5. CONCLUSIONS

In this paper, we study the use of support vector machines to predict the direction of

financial movements. The SVM is a promising tool for financial forecasting. As demonstrated

in our empirical analysis, the SVM is superior to the other individual classification methods

in forecasting the weekly movement direction of the BSE30 Index. This is a clear message for

financial forecasters and traders, as it can lead to capital gains. However, each method has

its own strengths and weaknesses. We therefore propose a combined model that integrates the

SVM with other classification methods, so that the weakness of one method is balanced by the

strengths of another, achieving a synergistic effect. The combined model performs best among

all the forecasting methods.

The underlying time series were derived from the Bombay Stock Exchange Index. The

support vector machine was then trained in order to predict the movement of 28 stocks of the

index against market. Features for training were directly extracted from the statistical

properties of the time series and no fundamental information was used.

Model selection was performed as cross-validation and was based on performance on

out-of-sample data, in order to avoid the critique of foresight. The main result of this

study is that the movement of stocks can be significantly predicted with support vector

regression using only technical indicators.

6. REFERENCES

1. Cristianini N, Shawe-Taylor J. An introduction to support vector machines and other

kernel-based learning methods. New York: Cambridge University Press; 2000.

2. Cao LJ, Tay FEH. Financial forecasting using support vector machines. Neural

Computing & Applications 2001;10:184–92.


3. Tay FEH, Cao LJ. Application of support vector machines in financial time series

forecasting. Omega 2001;29:309–17.

4. Castanias RP. Macro information and the variability of stock market prices. Journal of

Finance 1979;34:439–

5. Schwert GW. The adjustment of stock prices to information about inflation. Journal of

Finance 1981;36:15–29.

6. Schwert GW. Stock returns and real activity: a century of evidence. Journal of Finance

1990;45:1237–57.

7. Fama EF. Stock returns, real activity, inflation, and money. American Economic Review

1981;71:545–65.

8. Chen N-F, Roll R, Ross SA. Economic forces and the stock market. Journal of Business

1986;59:383–403.

9. Hardouvelis GA. Macroeconomic information and stock prices. Journal of Economics and

Business 1987;39:131–40.

10. Darrat AF. Stock returns, money and fiscal deficits. Journal of Financial and

Quantitative Analysis 1990;25:387–98.

11. Blank SC. "Chaos" in futures markets? A nonlinear dynamical analysis. Journal of

Futures Markets 1991;11:711–28.

12. DeCoster GP, Labys WC, Mitchell DW. Evidence of chaos in commodity futures prices.

Journal of Futures Markets 1992;12:291–305.

13. Frank M, Stengos T. Measuring the strangeness of gold and silver rates of return.

Review of Economic Studies 1989;56:553–67.

14. Vapnik VN. Statistical learning theory. New York: Wiley; 1998.

15. Vapnik VN. An overview of statistical learning theory. IEEE Transactions on Neural

Networks 1999;10:988–99.