Chapter 3 Neural Network based MPPT
NEURAL NETWORK BASED MAXIMUM POWER POINT
TRACKING
3.1 Introduction
This chapter introduces the concept of neural networks and presents a novel
approach to continuously track the maximum power from a PV array using neural networks.
An MPPT control algorithm is trained using neural networks, and the simulation results are
presented.
3.2 Introduction to neural networks
• What are Neural Networks?
• Neural Networks (NNs) are networks of neurons, for example, as found in real (i.e.
biological) brains.
• Artificial Neurons are crude approximations of the neurons found in brains. They
may be physical devices, or purely mathematical constructs.
• Artificial Neural Networks (ANNs) are networks of Artificial Neurons, and hence
constitute crude approximations to parts of real brains. They may be physical
devices, or simulated on conventional computers.
• From a practical point of view, an ANN is just a parallel computational system
consisting of many simple processing elements connected together in a specific way
in order to perform a particular task.
• One should never lose sight of how crude these approximations are, and how
over-simplified our ANNs are compared to real brains.
• Why are Artificial Neural Networks worth studying?
• They are extremely powerful computational devices (universal computers).
• Massive parallelism makes them very efficient.
• They can learn and generalize from training data – so there is no need for enormous
feats of programming.
• They are particularly fault tolerant – this is equivalent to the “graceful degradation”
found in biological systems.
• They are very noise tolerant – so they can cope with situations where normal
symbolic systems would have difficulty.
• In principle, they can do anything a symbolic/logic system can do, and more.
(In practice, getting them to do it can be rather difficult…)
• What are Artificial Neural Networks used for?
As with the field of AI in general, there are two basic goals for neural network
research:
• Brain modeling: The scientific goal of building models of how real brains work. This
can potentially help us understand the nature of human intelligence, formulate better
teaching strategies, or devise better remedial actions for brain-damaged patients.
• Artificial System Building: The engineering goal of building efficient systems for
real-world applications. This may make machines more powerful, relieve humans of
tedious tasks, and may even improve upon human performance.
These should not be thought of as competing goals. We often use exactly the same networks
and techniques for both. Frequently progress is made when the two approaches are allowed
to feed into each other. There are fundamental differences though, e.g. the need for
biological plausibility in brain modeling, and the need for computational efficiency in
artificial system building.
3.3 A framework for distributed representation
An artificial neural network consists of a pool of simple processing units which
communicate by sending signals to each other over a large number of weighted connections.
A set of major aspects of a parallel distributed model can be distinguished as follows:
• A set of processing units ('neurons', 'cells');
• A state of activation y_k for every unit, which is equivalent to the output of the unit;
• Connections between the units. Generally each connection is defined by a weight w_jk
which determines the effect which the signal of unit j has on unit k;
• A propagation rule, which determines the effective input s_k of a unit from its external
inputs;
• An activation function F_k, which determines the new level of activation based on the
effective input s_k(t) and the current activation y_k(t) (i.e., the update);
• An external input (bias, offset) θ_k for each unit;
• A method for information gathering (the learning rule);
• An environment within which the system must operate, providing input signals and –
if necessary – error signals.
Figure 3.1 illustrates these basics, some of which will be discussed in the next sections.
Fig. 3.1: The basic components of an artificial neural network
3.3.1 Processing units
Each unit performs a relatively simple job: receive input from neighbours or external
sources and use this to compute an output signal which is propagated to other units. Apart
from this processing, a second task is the adjustment of the weights. The system is
inherently parallel in the sense that many units can carry out their computations at the same
time.
Within neural systems it is useful to distinguish three types of units: input units
which receive data from outside the neural network, output units which send data out of the
neural network, and hidden units whose input and output signals remain within the neural
network.
During operation, units can be updated either synchronously or asynchronously.
With synchronous updating, all units update their activation simultaneously; with
asynchronous updating, each unit has a (usually fixed) probability of updating its activation
at a time t, and usually only one unit will be able to do this at a time. In some cases the latter
model has some advantages.
3.3.2 Connections between units
In most cases we assume that each unit provides an additive contribution to the input
of the unit with which it is connected. The total input to unit k is simply the weighted sum of
the separate outputs from each of the connected units plus a bias or offset term θ_k:

    s_k(t) = Σ_j w_jk(t) y_j(t) + θ_k(t)                    (3.1)

The contribution for positive w_jk is considered as an excitation and for negative w_jk as
inhibition. In some cases more complex rules for combining inputs are used, in which a
distinction is made between excitatory and inhibitory inputs.
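The additive propagation rule of equation (3.1) can be sketched in a few lines of Python (a minimal illustration; the NumPy usage and the particular weight and activation values are our own assumptions, not taken from this chapter):

```python
import numpy as np

def total_input(y, w_k, theta_k):
    """Effective input s_k of unit k: the weighted sum of the outputs
    y_j of the connected units plus the bias/offset term theta_k (Eq. 3.1)."""
    return float(np.dot(w_k, y) + theta_k)

# Outputs of three connected units and their weights into unit k
y = np.array([1.0, 0.5, -1.0])      # y_j, outputs of units j = 1..3
w_k = np.array([0.2, -0.4, 0.1])    # w_jk: positive = excitatory, negative = inhibitory
theta_k = 0.05                      # bias/offset theta_k

s_k = total_input(y, w_k, theta_k)  # 0.2*1.0 - 0.4*0.5 + 0.1*(-1.0) + 0.05 = -0.05
```

Note that a negative weight lets a strongly active unit j suppress the total input of unit k, which is exactly the inhibition described above.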
3.3.3 Activation and output rules
We also need a rule which gives the effect of the total input on the activation of the
unit. We need a function F_k which takes the total input s_k(t) and the current activation y_k(t)
and produces a new value of the activation of the unit k:

    y_k(t+1) = F_k( y_k(t), s_k(t) )                        (3.2)

Often, the activation function is a non-decreasing function of the total input of the unit:

    y_k(t+1) = F_k( s_k(t) ) = F_k( Σ_j w_jk(t) y_j(t) + θ_k(t) )      (3.3)
although activation functions are not restricted to non-decreasing functions. Generally, some
sort of threshold function is used: a hard limiting threshold function (a sgn function), or a
linear or semi-linear function, or a smoothly limiting threshold (figure 3.2). For this
smoothly limiting function often a sigmoid (S-shaped) function like:
    y_k = F(s_k) = 1 / (1 + e^(−s_k))                        (3.4)
is used. In some applications a hyperbolic tangent is used, yielding output values in the
range [-1, +1].
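The three kinds of threshold function named above can be sketched as follows (a minimal illustration in NumPy; the sample inputs are our own):

```python
import numpy as np

def sgn(s):
    """Hard-limiting threshold: -1 or +1 depending on the sign of s."""
    return np.where(s >= 0, 1.0, -1.0)

def sigmoid(s):
    """Smoothly limiting sigmoid of Eq. 3.4; outputs lie in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-s))

def tanh_act(s):
    """Hyperbolic tangent; outputs lie in (-1, +1)."""
    return np.tanh(s)

s = np.array([-2.0, 0.0, 2.0])
print(sgn(s))       # [-1.  1.  1.]
print(sigmoid(s))   # approx. [0.119 0.5   0.881]
print(tanh_act(s))  # approx. [-0.964 0.    0.964]
```

The sigmoid and tanh differ only in output range; tanh is often preferred when activations should be symmetric about zero.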
Fig. 3.2: Various activation functions for a unit.
3.3.4 Network topologies
In the previous section we discussed the properties of the basic processing unit in an
artificial neural network. This section focuses on the pattern of connections between the
units and the propagation of data.
As for this pattern of connections, the main distinction we can make is between:
• Feed-forward networks, where the data flow from input to output units is strictly
feed-forward. The data processing can extend over multiple (layers of) units, but no
feedback connections are present, that is, connections extending from outputs of
units to inputs of units in the same layer or previous layers.
• Recurrent networks that do contain feedback connections. Contrary to feed-forward
networks, the dynamical properties of the network are important. In some cases, the
activation values of the units undergo a relaxation process such that the network will
evolve to a stable state in which these activations do not change anymore. In other
applications, the changes of the activation values of the output neurons are
significant, such that the dynamical behavior constitutes the output of the network.
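The strictly feed-forward data flow can be sketched as a single pass through one hidden layer (a minimal illustration; the layer sizes, random weights, and choice of sigmoid are our own assumptions):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def feed_forward(x, W_hidden, th_hidden, W_out, th_out):
    """Strictly feed-forward data flow: input -> hidden -> output.
    No feedback connections: no output of a unit feeds a unit in the
    same layer or an earlier layer."""
    h = sigmoid(W_hidden @ x + th_hidden)  # hidden activations
    return sigmoid(W_out @ h + th_out)     # output activations

rng = np.random.default_rng(0)
x = np.array([0.5, -0.3])                  # two input units
W_h = rng.normal(size=(3, 2))              # weights into three hidden units
th_h = np.zeros(3)                         # hidden biases
W_o = rng.normal(size=(1, 3))              # weights into one output unit
th_o = np.zeros(1)                         # output bias

y = feed_forward(x, W_h, th_h, W_o, th_o)  # single output in (0, 1)
```

A recurrent network would differ only in that some weight matrix feeds activations back to the same or an earlier layer, so the output would have to be computed by iterating this pass over time.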
3.3.5 Paradigms of learning
We can categorize the learning situations in two distinct sorts. These are:
• Supervised learning or Associative learning, in which the network is trained by
providing it with input and matching output patterns. These input-output pairs can be
provided by an external teacher, or by the system which contains the network (self-
supervised).
• Unsupervised learning or Self-organisation, in which an (output) unit is trained to
respond to clusters of patterns within the input. In this paradigm the system is
supposed to discover statistically salient features of the input population. Unlike the
supervised learning paradigm, there is no a priori set of categories into which the
patterns are to be classified; rather the system must develop its own representation of
the input stimuli.
3.3.6 Modifying patterns of connectivity
Both learning paradigms discussed above result in an adjustment of the weights of
the connections between units, according to some modification rule. Virtually all learning
rules for models of this type can be considered as a variant of the Hebbian learning rule
suggested by Hebb in his classic book Organization of Behaviour (Hebb, 1949). The
basic idea is that if two units j and k are active simultaneously, their interconnection must be
strengthened. If j receives input from k, the simplest version of Hebbian learning prescribes
to modify the weight w_jk with:

    Δw_jk = γ y_j y_k                                        (3.5)
where γ is a positive constant of proportionality representing the learning rate. Another
common rule uses not the actual activation of unit k but the difference between the actual
and desired activation for adjusting the weights:

    Δw_jk = γ y_j (d_k − y_k)                                (3.6)

in which d_k is the desired activation provided by a teacher. This is often called the Widrow-Hoff rule or the delta rule.
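The two update rules (3.5) and (3.6) can be sketched directly (a minimal illustration; the learning rate and activation values are our own assumptions):

```python
def hebb_update(w_jk, y_j, y_k, gamma=0.1):
    """Hebbian rule (Eq. 3.5): strengthen w_jk when units j and k
    are active simultaneously."""
    return w_jk + gamma * y_j * y_k

def delta_update(w_jk, y_j, y_k, d_k, gamma=0.1):
    """Widrow-Hoff / delta rule (Eq. 3.6): adjust w_jk in proportion
    to the difference between the desired activation d_k and the
    actual activation y_k."""
    return w_jk + gamma * y_j * (d_k - y_k)

w = 0.5
w = hebb_update(w, y_j=1.0, y_k=1.0)             # 0.5 + 0.1*1*1       = 0.6
w = delta_update(w, y_j=1.0, y_k=0.8, d_k=1.0)   # 0.6 + 0.1*1*(1-0.8) = 0.62
```

Note the key difference: the Hebbian rule always grows a weight when both units are active, whereas the delta rule stops changing the weight once the actual activation matches the desired one.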
3.4 Structures of artificial neural networks
3.4.1 Network Models
The interconnection of artificial neurons results in a neural network, NNW (often
called a neurocomputer or connectionist system in the literature), whose objective is to emulate
the function of a human brain in a certain domain to solve scientific, engineering, and many
other real-life problems. The structure of the biological neural network is not well understood,
and therefore many NNW models have been proposed. A few NNW models can be listed