ACKNOWLEDGEMENT
First and foremost, we record our sincere thanks to Almighty God and our
beloved parents who provided us this chance during our tenure in college. We are
grateful to our college and to Dr. PRINCIPAL NAME, M.E., Ph.D., our beloved
principal.
We are also thankful to Mrs. HOD NAME, B.Tech., Head of the Department
of Computer Science and Engineering, for providing the necessary facilities during
the execution of our project work. We also thank her for her valuable suggestions,
advice, guidance and constructive ideas at each and every step, which were indeed
a great help towards the successful completion of the project.
This project would not have been a success without our internal guide. We
therefore extend our deep sense of gratitude to our internal guide, Ms. GUIDE NAME,
B.Tech., for the effort she took in guiding us through all the stages of our
project work.
We are very much indebted to our external guide, Mr. XXXX, B.E., project
guide at “COMPANY NAME”, for relentlessly supporting us with technical
guidance throughout our project work.
PROJECT MEMBER1
PROJECT MEMBER2
ABSTRACT
We propose a personalized mobile search engine, PMSE, that captures the users’
preferences in the form of concepts by mining their click through data. Due to the
importance of location information in mobile search, PMSE classifies these
concepts into content concepts and location concepts. In addition, users’ locations
(positioned by GPS) are used to supplement the location concepts in PMSE. The
user preferences are organized in an ontology-based, multi-facet user profile,
which is used to adapt a personalized ranking function for rank adaptation of
future search results. To characterize the diversity of the concepts associated with
a query and their relevance to the user’s need, four entropies are introduced to
balance the weights between the content and location facets. Based on the
client-server model, we also present a detailed architecture and design for
implementation of PMSE. In our design, the client collects and stores locally the
click-through data to protect privacy, whereas heavy tasks such as concept
extraction, training and re-ranking are performed at the PMSE server. Moreover,
we address the privacy issue by restricting the information in the user profile
exposed to the PMSE server with two privacy parameters. We prototype PMSE on
the Google Android platform. Experimental results show that PMSE significantly
improves precision compared to the baseline.
ORGANIZATION PROFILE
COMPANY PROFILE:
Founded in 2009, JP iNFOTeCH located at Puducherry, has a rich background in developing academic student projects, especially in solving latest IEEE Papers, Software Development and continues its entire attention on achieving transcending excellence in the Development and Maintenance of Software Projects and Products in Many Areas.
In today's competitive technological environment, students in the computer
science stream want to ensure that they are receiving guidance from an
organization that can meet their professional needs. With our well-equipped team
of solid information systems professionals, who study, design, develop, enhance,
customize, implement, maintain and support various aspects of information
technology, students can be sure of that.
We understand the students’ needs, and develop their quality of professional
life by simply making the technology readily usable for them. We practice
exclusively in software development, network simulation, search engine
optimization, customization and system integration. Our project methodology
includes techniques for initiating a project, developing the requirements, making
clear assignments to the project team, developing a dynamic schedule, reporting
status to executives, and problem solving.
The indispensable factors which give us a competitive advantage over others in the market may be stated as follows:
As a team we have a clear vision and we realize it too. As a statistical evaluation, the team has more than 40,000 hours of expertise in providing real-time solutions in the fields of Android mobile app development, networking, web designing, secure computing, mobile computing, cloud computing, image processing and implementation, networking with the OMNeT++ simulator, client-server technologies in Java (J2EE/J2ME/EJB), Android, .NET (ASP.NET, VB.NET, C#.NET), MATLAB, NS2, Simulink, embedded systems, power electronics, VB & VC++, Oracle, and operating system concepts with Linux.
OUR VISION:
“Impossible as Possible”: this is our vision, and we work according to it.
LIST OF TABLES
TABLE NO TABLE NAME PAGE NO
LIST OF FIGURES
FIGURE NO FIGURE NAME PAGE NO
LIST OF ABBREVIATIONS
ADO - ActiveX Data Objects
SQL - Structured Query Language
ASP - Active Server Pages
IIS - Internet Information Services
CLR - Common Language Runtime
IL - Intermediate Language
XML - Extensible Markup Language
ISP - Internet Service Provider
VLSI - Very Large Scale Integration
MSIDE - Microsoft Integrated Development Environment
NGWS - Next Generation Windows Services
CHAPTER 1- INTRODUCTION
1.1 Introduction
A major problem in mobile search is that the interactions between the users and
search engines are limited by the small form factors of the mobile devices. As a
result, mobile users tend to submit shorter, hence, more ambiguous queries
compared to their web search counterparts. In order to return highly relevant
results to the users, mobile search engines must be able to profile the users’
interests and personalize the search results according to the users’ profiles. A
practical approach to capturing a user’s interests for personalization is to analyze
the user’s clickthrough data [5], [10], [15], [18]. Leung et al. developed a search
engine personalization method based on users’ concept preferences and showed
that it is more effective than methods that are based on page preferences [12].
However, most of the previous work assumed that all concepts are of the same
type. Observing the need for different types of concepts, we present in this paper a
personalized mobile search engine, PMSE, which represents different types of
concepts in different ontologies. In particular, recognizing the importance of
location information in mobile search, we separate concepts into location concepts
and content concepts. For example, a user who is planning to visit Japan may issue
the query “hotel”, and click on the search results about hotels in Japan. From the
clickthroughs of the query “hotel”, PMSE can learn the user’s content preference
(e.g., “room rate” and “facilities”) and location preferences (“Japan”).
Accordingly, PMSE will favor results that are concerned with hotel information in
Japan for future queries on “hotel”. The introduction of location preferences offers
PMSE an additional dimension for capturing a user’s interest and an opportunity to
enhance search quality for users. To incorporate context information revealed by
user mobility, we also take into account the visited physical locations of users in
the PMSE. Since this information can be conveniently obtained by GPS devices, it
is hence referred to as GPS locations. GPS locations play an important role in
mobile web search. For example, if the user, who is searching for hotel
information, is currently located in “Shinjuku, Tokyo”, his/her position can be used
to personalize the search results to favor information about nearby hotels. Here, we
can see that the GPS locations (i.e., “Shinjuku, Tokyo”) help reinforce the user’s
location preferences (i.e., “Japan”) derived from a user’s search activities to
provide the most relevant results. Our proposed framework is capable of
combining a user’s GPS locations and location preferences into the personalization
process. To the best of our knowledge, our paper is the first to propose a
personalization framework that utilizes a user’s content preferences and location
preferences as well as the GPS locations in personalizing search results. In this
paper, we propose a realistic design for PMSE by adopting the metasearch
approach which relies on one of the commercial search engines, such as Google,
Yahoo or Bing, to perform an actual search. The client is responsible for receiving
the user’s requests, submitting the requests to the PMSE server, displaying the
returned results, and collecting his/her clickthroughs in order to derive his/her
personal preferences. The PMSE server, on the other hand, is responsible for
handling heavy tasks such as forwarding the requests to a commercial search
engine, as well as training and reranking of search results before they are returned
to the client. The user profiles for specific users are stored on the PMSE clients,
thus preserving privacy to the users. PMSE has been prototyped with PMSE clients
on the Google Android platform and the PMSE server on a PC server to validate
the proposed ideas. We also recognize that the same content or location concept
may have different degrees of importance to different users and different queries.
To formally characterize the diversity of the concepts associated with a query and
their relevances to the user’s need, we introduce the notion of content and location
entropies to measure the amount of content and location information associated
with a query. Similarly, to measure how much the user is interested in the content
and/or location information in the results, we propose click content and location
entropies. Based on these entropies, we develop a method to estimate the
personalization effectiveness for a particular query of a given user, which is then
used to strike a balanced combination between the content and location
preferences. The results are reranked according to the user’s content and location
preferences before returning to the client. The main contributions of this paper are
as follows:
• This paper studies the unique characteristics of content and location concepts, and
provides a coherent strategy using a client-server architecture to integrate them into
a uniform solution for the mobile environment.
• The proposed personalized mobile search engine, PMSE, is an innovative
approach for personalizing web search results. By mining content and location
concepts for user profiling, it utilizes both the content and location preferences to
personalize search results for a user.
• PMSE incorporates a user’s physical locations in the personalization process. We
conduct experiments to study the influence of a user’s GPS locations in
personalization. The results show that GPS locations help improve retrieval
effectiveness for location queries (i.e., queries that retrieve lots of location
information).
• We propose a new and realistic system design for PMSE. Our design adopts the
client-server model, in which user queries are forwarded to a PMSE server that
handles training and reranking efficiently. We implement a working prototype
of the PMSE clients on the Google Android platform, and the PMSE server on a
PC to validate the proposed ideas. Empirical results show that our design can
efficiently handle user requests.
• Privacy preservation is a challenging issue in PMSE, where users send their user
profiles along with queries to the PMSE server to obtain personalized search
results. PMSE addresses the privacy issue by allowing users to control their
privacy levels with two privacy parameters, minDistance and expRatio. Empirical
results show that our proposal facilitates smooth privacy preserving control, while
maintaining good ranking quality.
• We conduct a comprehensive set of experiments to evaluate the performance of
the proposed PMSE. Empirical results show that the ontology-based user profiles
can successfully capture users’ content and location preferences and utilize the
preferences to produce relevant results for the users. It significantly outperforms
existing strategies which use either content or location preference only.
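The entropy-based balancing of the content and location facets described above can be illustrated with a small sketch. The paper defines four specific entropies and their combination; here we only assume plain Shannon entropy over concept click frequencies and a simple ratio for the facet weight. The class and method names are illustrative, not PMSE's actual API.

```java
import java.util.Map;

/** Illustrative sketch: Shannon entropy over concept frequencies, used to
 *  balance the content and location facets of a query. Not PMSE's exact
 *  formulation; the paper defines four entropies and their combination. */
class ConceptEntropy {

    /** Shannon entropy (base 2) of a frequency map; 0 for an empty map. */
    static double entropy(Map<String, Integer> freq) {
        double total = freq.values().stream().mapToInt(Integer::intValue).sum();
        if (total == 0) return 0.0;
        double h = 0.0;
        for (int f : freq.values()) {
            if (f == 0) continue;
            double p = f / total;
            h -= p * (Math.log(p) / Math.log(2));
        }
        return h;
    }

    /** Toy combination: weight the location facet higher when the location
     *  concepts of a query carry more information (higher entropy). */
    static double locationWeight(double contentEntropy, double locationEntropy) {
        double sum = contentEntropy + locationEntropy;
        return sum == 0 ? 0.5 : locationEntropy / sum;
    }
}
```

For the query “hotel”, clicks spread evenly over “room rate” and “facilities” yield a content entropy of 1 bit, while clicks concentrated on “Japan” alone yield a location entropy of 0, shifting weight toward the content facet.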
1.2 Features
Minimum time needed for processing.
Greater efficiency.
Better service.
User-friendly and interactive interface.
1.3 Organization of Chapters
Chapter 1 introduces the project concept and gives an overview of the
project. Chapter 2 discusses the project domain and provides a detailed
description of existing systems through a survey of the literature on existing
techniques. It then presents the techniques and methods of our proposed system,
lists the advantages of the proposed method, and tabulates the differences
between the existing system and the proposed system. Chapter 3 presents a system
analysis of the methods we propose. Chapter 4 lists the hardware and software
requirements of our project. Chapter 5 presents the modules and their
descriptions, and depicts the use-case and class diagrams of our project.
Chapter 6 concludes our proposal, and Chapter 7 lists the references for our
proposed method.
CHAPTER 2- LITERATURE SURVEY
2.1 About the Domain
Data mining (the advanced analysis step of the "Knowledge Discovery in
Databases" process, or KDD), an interdisciplinary subfield of computer science, is
the computational process of discovering patterns in large data sets involving
methods at the intersection of artificial intelligence, machine learning, statistics,
and database systems. The overall goal of the data mining process is to extract
information from a data set and transform it into an understandable structure for
further use. Aside from the raw analysis step, it involves database and data
management aspects, data preprocessing, model and inference considerations,
interestingness metrics, complexity considerations, post-processing of discovered
structures, visualization, and online updating.
The term is a buzzword, and is frequently misused to mean any form of large-scale
data or information processing (collection, extraction, warehousing, analysis, and
statistics) but is also generalized to any kind of computer decision support system,
including artificial intelligence, machine learning, and business intelligence. In the
proper use of the word, the key term is discovery, commonly defined as "detecting
something new". Even the popular book "Data mining: Practical machine learning
tools and techniques with Java" (which covers mostly machine learning material)
was originally to be named just "Practical machine learning", and the term "data
mining" was only added for marketing reasons. Often the more general terms
"(large scale) data analysis", or "analytics" – or when referring to actual methods,
artificial intelligence and machine learning – are more appropriate.
The actual data mining task is the automatic or semi-automatic analysis of large
quantities of data to extract previously unknown interesting patterns such as groups
of data records (cluster analysis), unusual records (anomaly detection) and
dependencies (association rule mining). This usually involves using database
techniques such as spatial indices. These patterns can then be seen as a kind of
summary of the input data, and may be used in further analysis or, for example, in
machine learning and predictive analytics. For example, the data mining step might
identify multiple groups in the data, which can then be used to obtain more
accurate prediction results by a decision support system. Neither the data
collection, data preparation, nor result interpretation and reporting are part of the
data mining step, but do belong to the overall KDD process as additional steps.
The related terms data dredging, data fishing, and data snooping refer to the use of
data mining methods to sample parts of a larger population data set that are (or
may be) too small for reliable statistical inferences to be made about the validity of
any patterns discovered. These methods can, however, be used in creating new
hypotheses to test against the larger data populations.
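As a concrete instance of the dependency mining (association rules) mentioned above, the standard support and confidence measures for a candidate rule can be computed as below. This is a generic illustration of the measures, not a component of PMSE.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/** Minimal sketch of support and confidence for an association rule A -> B
 *  over a list of transactions (e.g., sets of concepts clicked together).
 *  Generic illustration only; not part of PMSE. */
class AssociationRule {

    /** Fraction of transactions containing every item in the itemset. */
    static double support(List<Set<String>> transactions, Set<String> items) {
        if (transactions.isEmpty()) return 0.0;
        long hits = transactions.stream().filter(t -> t.containsAll(items)).count();
        return (double) hits / transactions.size();
    }

    /** confidence(A -> B) = support(A and B together) / support(A). */
    static double confidence(List<Set<String>> transactions, Set<String> a, Set<String> b) {
        Set<String> both = new HashSet<>(a);
        both.addAll(b);
        double supportA = support(transactions, a);
        return supportA == 0 ? 0.0 : support(transactions, both) / supportA;
    }
}
```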
2.2 Literature Survey
1) Improving web search ranking by incorporating user behavior information
AUTHORS: E. Agichtein, E. Brill, and S. Dumais
We show that incorporating user behavior data can significantly improve ordering
of top results in real web search setting. We examine alternatives for incorporating
feedback into the ranking process and explore the contributions of user feedback
compared to other common web search features. We report results of a large scale
evaluation over 3,000 queries and 12 million user interactions with a popular web
search engine. We show that incorporating implicit feedback can augment other
features, improving the accuracy of competitive web search ranking algorithms
by as much as 31% relative to the original performance.
2) Learning user interaction models for predicting web search result
preferences
AUTHORS: E. Agichtein, E. Brill, S. Dumais, and R. Ragno
Evaluating user preferences of web search results is crucial for search engine
development, deployment, and maintenance. We present a real-world study of
modeling the behavior of web search users to predict web search result
preferences. Accurate modeling and interpretation of user behavior has important
applications to ranking, click spam detection, web search personalization, and
other tasks. Our key insight to improving robustness of interpreting implicit
feedback is to model query-dependent deviations from the expected "noisy" user
behavior. We show that our model of clickthrough interpretation improves
prediction accuracy over state-of-the-art clickthrough methods. We generalize our
approach to model user behavior beyond clickthrough, which results in higher
preference prediction accuracy than models based on clickthrough information
alone. We report results of a large-scale experimental evaluation that show
substantial improvements over published implicit feedback interpretation methods.
3) Efficient query processing in geographic web search engines
AUTHORS: Y.-Y. Chen, T. Suel, and A. Markowetz
Geographic web search engines allow users to constrain and order search results in
an intuitive manner by focusing a query on a particular geographic region.
Geographic search technology, also called local search, has recently received
significant interest from major search engine companies. Academic research in this
area has focused primarily on techniques for extracting geographic knowledge
from the web. In this paper, we study the problem of efficient query processing in
scalable geographic search engines. Query processing is a major bottleneck in
standard web search engines, and the main reason for the thousands of machines
used by the major engines. Geographic search engine query processing is different
in that it requires a combination of text and spatial data processing techniques. We
propose several algorithms for efficient query processing in geographic search
engines, integrate them into an existing web search query processor, and evaluate
them on large sets of real data and query traces.
4) Analysis of geographic queries in a search engine log
AUTHORS: Q. Gan, J. Attenberg, A. Markowetz, and T. Suel
Geography is becoming increasingly important in web search. Search engines can
often return better results to users by analyzing features such as user location or
geographic terms in web pages and user queries. This is also of great commercial
value as it enables location specific advertising and improved search for local
businesses. As a result, major search companies have invested significant resources
into geographic search technologies, also often called local search.
This paper studies geographic search queries, i.e., text queries such as "hotel new
york" that employ geographical terms in an attempt to restrict results to a particular
region or location. Our main motivation is to identify opportunities for improving
geographical search and related technologies, and we perform an analysis of 36
million queries of the recently released AOL query trace. First, we identify typical
properties of geographic search (geo) queries based on a manual examination of
several thousand queries. Based on these observations, we build a classifier that
separates the trace into geo and non-geo queries. We then investigate the properties
of geo queries in more detail, and relate them to web sites and users associated
with such queries. We also propose a new taxonomy for geographic search queries.
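The geo/non-geo query separation described in this paper can be illustrated with a minimal sketch. The surveyed classifier is built from richer features derived from a manual examination of queries; here we only assume a small gazetteer of location terms, and all names are illustrative.

```java
import java.util.Set;

/** Toy geo/non-geo query classifier: a query is classified as "geo" if any
 *  token appears in a small gazetteer of location terms. The classifier in
 *  the surveyed paper uses richer features; this only illustrates the
 *  separation idea. */
class GeoQueryClassifier {

    private final Set<String> gazetteer;

    GeoQueryClassifier(Set<String> gazetteer) {
        this.gazetteer = gazetteer;
    }

    boolean isGeoQuery(String query) {
        for (String token : query.toLowerCase().split("\\s+")) {
            if (gazetteer.contains(token)) return true;
        }
        return false;
    }
}
```

With a gazetteer containing "york", the example query "hotel new york" from the paper is classified as a geo query.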
5) Optimizing search engines using clickthrough data
AUTHORS: T. Joachims
This paper presents an approach to automatically optimizing the retrieval quality of
search engines using clickthrough data. Intuitively, a good information retrieval
system should present relevant documents high in the ranking, with less relevant
documents following below. While previous approaches to learning retrieval
functions from examples exist, they typically require training data generated from
relevance judgments by experts. This makes them difficult and expensive to apply.
The goal of this paper is to develop a method that utilizes clickthrough data for
training, namely the query-log of the search engine in connection with the log of
links the users clicked on in the presented ranking. Such clickthrough data is
available in abundance and can be recorded at very low cost. Taking a Support
Vector Machine (SVM) approach, this paper presents a method for learning
retrieval functions. From a theoretical perspective, this method is shown to be
well-founded in a risk minimization framework. Furthermore, it is shown to be
feasible even for large sets of queries and features. The theoretical results are
verified in a controlled experiment. It shows that the method can effectively adapt
the retrieval function of a meta-search engine to a particular group of users,
outperforming Google in terms of retrieval quality after only a couple of hundred
training examples.
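The clickthrough-based training data used in this approach can be sketched as follows. A common way to derive preferences from a query log, used with ranking SVMs, is the skip-above heuristic: a clicked result is taken as preferred over every unclicked result ranked above it. This is a hedged sketch of that heuristic, not the paper's exact procedure, and the SVM training itself is omitted.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

/** Sketch of turning clickthrough data into pairwise training preferences:
 *  a clicked result is taken as preferred over every unclicked result ranked
 *  above it (the "skip-above" idea commonly used with ranking SVMs). This
 *  illustrates the kind of input such training consumes, not the SVM itself. */
class PreferencePairs {

    /** Ranks are 0-based; each returned pair is {preferredRank, lessPreferredRank}. */
    static List<int[]> fromClicks(int resultCount, Set<Integer> clickedRanks) {
        List<int[]> pairs = new ArrayList<>();
        for (int clicked : clickedRanks) {
            if (clicked >= resultCount) continue;   // ignore out-of-range clicks
            for (int above = 0; above < clicked; above++) {
                if (!clickedRanks.contains(above)) {
                    pairs.add(new int[]{clicked, above});
                }
            }
        }
        return pairs;
    }
}
```

A click at rank 2 with ranks 0 and 1 unclicked yields two preference pairs, each saying the clicked result should have been ranked higher.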
CHAPTER 3- SYSTEM ANALYSIS
EXISTING SYSTEM:
A major problem in mobile search is that the interactions between the users and
search engines are limited by the small form factors of the mobile devices. As a
result, mobile users tend to submit shorter, hence, more ambiguous queries
compared to their web search counterparts. In order to return highly relevant
results to the users, mobile search engines must be able to profile the users’
interests and personalize the search results according to the users’ profiles.
PROPOSED SYSTEM:
In this paper, we propose a realistic design for PMSE by adopting the metasearch
approach which relies on one of the commercial search engines, such as Google,
Yahoo or Bing, to perform an actual search. The client is responsible for receiving
the user’s requests, submitting the requests to the PMSE server, displaying the
returned results, and collecting his/her clickthroughs in order to derive his/her
personal preferences. The PMSE server, on the other hand, is responsible for
handling heavy tasks such as forwarding the requests to a commercial search
engine, as well as training and reranking of search results before they are returned
to the client. The user profiles for specific users are stored on the PMSE clients,
thus preserving privacy to the users. PMSE has been prototyped with PMSE clients
on the Google Android platform and the PMSE server on a PC server to validate
the proposed ideas.
SYSTEM STUDY
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a business
proposal is put forth with a very general plan for the project and some cost
estimates. During system analysis the feasibility study of the proposed system is to
be carried out. This is to ensure that the proposed system is not a burden to the
company. For feasibility analysis, some understanding of the major requirements
for the system is essential.
Three key considerations involved in the feasibility analysis are
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the system will
have on the organization. The amount of fund that the company can pour into the
research and development of the system is limited. The expenditures must be
justified. Thus the developed system is well within the budget, and this was
achieved because most of the technologies used are freely available. Only the
customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the
technical requirements of the system. Any system developed must not place a high
demand on the available technical resources, as this would lead to high demands
being placed on the client. The developed system must therefore have modest
requirements, since only minimal or no changes are required for implementing this
system.
SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of the system by the
user. This includes the process of training the user to use the system efficiently.
The user must not feel threatened by the system, instead must accept it as a
necessity. The level of acceptance by the users solely depends on the methods that
are employed to educate the user about the system and to make him familiar with
it. His level of confidence must be raised so that he is also able to make some
constructive criticism, which is welcomed, as he is the final user of the system.
CHAPTER 4- SYSTEM REQUIREMENTS
SYSTEM MODELS
HARDWARE REQUIREMENT
CPU type : Intel Pentium 4
Clock speed : 3.0 GHz
RAM size : 512 MB
Hard disk capacity : 40 GB
Monitor type : 15 Inch color monitor
Keyboard type : internet keyboard
Mobile : ANDROID MOBILE
SOFTWARE REQUIREMENT
Operating System : Android
Language : Android SDK 2.3
Documentation : MS Office
CHAPTER 5- MODULE DESCRIPTION
MODULES:
Mobile Client
PMSE Server
Re-Rank Search Results
Ontology update and Clickthrough collection
MODULE DESCRIPTION:
Mobile Client:
In the PMSE’s client-server architecture, PMSE clients are responsible for
storing the user clickthroughs and the ontologies derived from the PMSE server.
Simple tasks, such as updating clickthroughs and ontologies, creating feature
vectors, and displaying re-ranked search results are handled by the PMSE clients
with limited computational power. Moreover, in order to minimize the data
transmission between client and server, the PMSE client would only need to
submit a query together with the feature vectors to the PMSE server, and the server
would automatically return a set of re-ranked search results according to the
preferences stated in the feature vectors. The data transmission cost is minimized,
because only the essential data (i.e., query, feature vectors, ontologies and search
results) are transmitted between client and server during the personalization
process.
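The lightweight client-side step of creating feature vectors from stored clickthroughs can be sketched as follows. We assume, for illustration only, that each clickthrough contributes the concepts of the clicked result and that the vector is normalized by the total concept count; PMSE's actual feature construction is richer, and all names here are hypothetical.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Sketch of the lightweight client-side step: turn locally stored
 *  clickthroughs into a normalized concept feature vector that accompanies
 *  the query to the server. Assumes each clickthrough contributes the
 *  concepts of the clicked result; PMSE's actual features are richer. */
class ClientFeatureVector {

    static Map<String, Double> build(List<List<String>> clickedConceptLists) {
        Map<String, Integer> counts = new HashMap<>();
        int total = 0;
        for (List<String> concepts : clickedConceptLists) {
            for (String concept : concepts) {
                counts.merge(concept, 1, Integer::sum);
                total++;
            }
        }
        // Normalize counts so the vector is independent of the number of clicks.
        Map<String, Double> vector = new HashMap<>();
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            vector.put(e.getKey(), (double) e.getValue() / total);
        }
        return vector;
    }
}
```

Only this small vector, rather than the raw clickthroughs, would travel to the server, which matches the data-transmission minimization goal described above.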
PMSE Server:
Heavy tasks, such as RSVM training and re-ranking of search results, are
handled by the PMSE server. The PMSE server’s design addresses two issues: (1)
limited computational power on mobile devices, and (2) minimization of data
transmission. PMSE consists of two major activities: 1) Re-ranking the search
results at the PMSE server, and 2) Ontology update and clickthroughs collection at
a mobile client.
Re-ranking the search results
When a user submits a query on the PMSE client, the query together with
the feature vectors containing the user’s content and location preferences (i.e.,
filtered ontologies according to the user’s privacy setting) are forwarded to the
PMSE server, which in turn obtains the search results from the backend search
engine (i.e., Google). The content and location concepts are extracted from the
search results and organized into ontologies to capture the relationships between
the concepts. The server is used to perform ontology extraction for its speed. The
feature vectors from the client are then used in RSVM training to obtain a content
weight vector and a location weight vector, representing the user interests based on
the user’s content and location preferences for the re-ranking. Again, the training
process is performed on the server for its speed. The search results are then re-
ranked according to the weight vectors obtained from the RSVM training. Finally,
the re-ranked results and the extracted ontologies for the personalization of future
queries are returned to the client.
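The final re-ranking step can be sketched as a weighted scoring of each result's concept features against the content and location weight vectors obtained from training. This is a simplified sketch with illustrative names; the RSVM training that produces the weight vectors is omitted.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

/** Sketch of the final re-ranking step: each result is scored by the dot
 *  product of its concept features with the content and location weight
 *  vectors, and results are sorted by descending score. */
class ReRanker {

    static double dot(Map<String, Double> weights, Map<String, Double> features) {
        double s = 0.0;
        for (Map.Entry<String, Double> e : features.entrySet()) {
            s += weights.getOrDefault(e.getKey(), 0.0) * e.getValue();
        }
        return s;
    }

    /** Returns result indices, best first. */
    static List<Integer> rerank(List<Map<String, Double>> resultFeatures,
                                Map<String, Double> contentWeights,
                                Map<String, Double> locationWeights) {
        return IntStream.range(0, resultFeatures.size())
                .boxed()
                .sorted(Comparator.comparingDouble((Integer i) ->
                        -(dot(contentWeights, resultFeatures.get(i))
                          + dot(locationWeights, resultFeatures.get(i)))))
                .collect(Collectors.toList());
    }
}
```

A result matching both a content concept (e.g., “hotel”) and a preferred location concept (e.g., “japan”) scores higher than one matching the content concept alone, and so is ranked first.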
Ontology update and Clickthrough collection:
The ontologies returned from the PMSE server contain the concept space
that models the relationships between the concepts extracted from the search
results. They are stored in the ontology database on the client. When the user
clicks on a search result, the clickthrough data together with the associated content
and location concepts are stored in the clickthrough database on the client. The
clickthroughs are stored on the PMSE clients, so the PMSE server does not know
the exact set of documents that the user has clicked on. This design allows user
privacy to be preserved to a certain degree. If the user is concerned with his/her own
privacy, the privacy level can be set to high so that only limited personal
information will be included in the feature vectors and passed along to the PMSE
server for the personalization. On the other hand, if a user wants more accurate
results according to his/her preferences, the privacy level can be set to low so that
the PMSE server can use the full feature vectors to maximize the personalization
effect.
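The two privacy parameters mentioned earlier, minDistance and expRatio, can be illustrated with a hypothetical sketch. The paper defines their exact semantics; here we merely assume that concepts closer to the ontology root than minDistance are withheld, and that only a fraction expRatio of the remaining concepts (highest weight first) is exposed to the server. Both this interpretation and all names are assumptions for illustration.

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

/** Hypothetical sketch of client-side privacy filtering. The paper defines
 *  the exact semantics of minDistance and expRatio; here we merely ASSUME
 *  that concepts closer to the ontology root than minDistance are withheld,
 *  and only a fraction expRatio of the rest (highest weight first) is
 *  exposed to the server. */
class PrivacyFilter {

    static class Concept {
        final String name;
        final int depth;      // distance from the ontology root
        final double weight;  // learned interest weight
        Concept(String name, int depth, double weight) {
            this.name = name;
            this.depth = depth;
            this.weight = weight;
        }
    }

    static List<Concept> expose(List<Concept> profile, int minDistance, double expRatio) {
        // Keep only concepts deep enough in the ontology, strongest interest first.
        List<Concept> deepEnough = profile.stream()
                .filter(c -> c.depth >= minDistance)
                .sorted(Comparator.comparingDouble((Concept c) -> -c.weight))
                .collect(Collectors.toList());
        int keep = (int) Math.floor(deepEnough.size() * expRatio);
        return deepEnough.subList(0, keep);
    }
}
```

Raising minDistance or lowering expRatio shrinks the exposed profile, trading personalization quality for privacy, which is the smooth control the paper reports.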
INPUT DESIGN
The input design is the link between the information system and the user. It
comprises developing specifications and procedures for data preparation, the
steps necessary to put transaction data into a usable form for processing. This
can be achieved by having the computer read data from a written or printed
document, or by having people key the data directly into the system. The design
of input focuses on controlling the amount of input required, controlling errors,
avoiding delay, avoiding extra steps, and keeping the process simple. The input
is designed in such a way that it provides security and ease of use while
retaining privacy. Input design considered the following things:
What data should be given as input?
How the data should be arranged or coded?
The dialog to guide the operating personnel in providing input.
Methods for preparing input validations and steps to follow when errors
occur.
OBJECTIVES
1.Input Design is the process of converting a user-oriented description of the input
into a computer-based system. This design is important to avoid errors in the data
input process and show the correct direction to the management for getting correct
information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry to handle large
volumes of data. The goal of designing input is to make data entry easier and
error-free. The data entry screen is designed in such a way that all data
manipulations can be performed. It also provides record viewing facilities.
3. When the data is entered, it is checked for validity. Data can be entered with
the help of screens. Appropriate messages are provided as and when needed so that
the user is never left confused. Thus, the objective of input design is to create
an input layout that is easy to follow.
OUTPUT DESIGN
A quality output is one which meets the requirements of the end user and
presents the information clearly. In any system, the results of processing are
communicated to the users and to other systems through outputs. In output design,
it is determined how the information is to be displayed for immediate need, as
well as the hard-copy output. It is the most important and direct source of
information for the user. Efficient and intelligent output design improves the
system’s relationship with the user and supports decision-making.
1. Designing computer output should proceed in an organized, well-thought-out
manner; the right output must be developed while ensuring that each output
element is designed so that people will find the system easy and effective to use.
When analysts design computer output, they should identify the specific output
that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create document, report, or other formats that contain information produced by
the system.
The output form of an information system should accomplish one or more of the
following objectives.
Convey information about past activities, current status, or projections of the
future.
Signal important events, opportunities, problems, or warnings.
Trigger an action.
Confirm an action.
SOFTWARE ENVIRONMENT
Android is a software stack for mobile devices that includes an operating system,
middleware and key applications. Google Inc. purchased the initial developer of
the software, Android Inc., in 2005.
Android's mobile operating system is based on the Linux kernel. Google and other
members of the Open Handset Alliance collaborated on Android's development
and release.
The Android Open Source Project (AOSP) is tasked with the maintenance and
further development of Android. The Android operating system is the world's
best-selling smartphone platform.
The Android SDK provides the tools and APIs necessary to begin developing
applications for the Android platform using the Java programming language. Android
has a large community of developers writing applications ("apps") that extend
the functionality of the devices. There are currently over 250,000 apps available
for Android.
Features
Application framework enabling reuse and replacement of components
Dalvik virtual machine optimized for mobile devices
Integrated browser based on the open source WebKit engine