P.I.N.G. Issue 9.1
The official technical magazine of PICT IEEE Student Branch.
Transcript
Page 1: P.I.N.G. Issue 9.1
Page 2: P.I.N.G. Issue 9.1


The PICT IEEE Student Branch (PISB) was established in the Pune Institute of Computer Technology (PICT) with the goal of exploring the diverse opportunities provided by the IEEE Society. Celebrating its Silver Jubilee this year, PISB has witnessed a radical change from the small group of technology enthusiasts who started it to one of the biggest Student Branches in IEEE Region 10. It also witnessed the inauguration of the Women in Engineering (WIE) Chapter at PISB in July 2013 by Mrs Lila Poonawalla, the first woman mechanical engineer to become the CEO of an Indian company.

Credenz is the Student Branch’s annual technical symposium, held in the fall season, which provides a comprehensive platform for students from universities nationwide to showcase their talents and capabilities.

This year, Credenz’13 celebrates a decade of successful and flourishing events and offers a package of creativity, technicality and entertainment for students and professionals from diverse fields. This year also hopes to change the digital outlook of Credenz by introducing three new online events. Bringing unparalleled and unique competitions together under one roof, Credenz’13 will surely prove to be ‘A Confluence of Technologies’.

P.I.N.G. (PICT IEEE Newsletter Group) is the official magazine of PISB, which aims at enriching the technical expertise of students, teachers and professionals alike. Started with the intent of providing a stage for students to show their technical prowess, this issue serves as a memorandum of the glorious journey of P.I.N.G. so far. In this issue, we decided to make P.I.N.G. more interactive, and as such, we proudly present the ‘Interview’ section, which features a prominent personality from the field of technology.

We thank our contributing authors for their insightful articles and hope to see and welcome them at Credenz’13. In conclusion, we acknowledge the ardent support of our team and junior volunteers; without their diligent perseverance, the culmination of this issue would not have been possible.

Page 3: P.I.N.G. Issue 9.1

Dear Readers,

I am happy to write this foreword for the latest edition of P.I.N.G., a unique activity of the PICT IEEE Student Branch. P.I.N.G. (PICT IEEE Newsletter Group) has made its impact not only in PICT and the IEEE Pune Section, but also at the IEEE India Council and IEEE Head Office level. This newsletter provides a platform for all, including student members, to showcase their talents and views while strengthening their IEEE activities.

The WIE Student Chapter was also inaugurated at PISB, headed by Aditi Tatti. The chapter was inaugurated on 17th July 2013 by Mrs. Lila Poonawalla, the first woman engineer to become a CEO in Indian business history. I hope to see the formation of other such units and Special Interest Groups (SIGs).

At PISB, we try our level best to create an environment where students can keep themselves updated with emerging trends, technologies and innovations. I would like to refer to a quote here: “Nothing lasts forever and nothing is ever in equilibrium. Innovation is the only constant. Innovation is always resisted and often retarded, but rarely extinguished.” – Richard Koch

In this context, I would like to put forward a recent research result from the Massachusetts Institute of Technology, which stated, “For 65 years, most information-theoretic analyses of cryptographic systems have made a mathematical assumption that turns out to be wrong.” This result indicates that encryption is less secure than we thought. Another breakthrough, by Prof. Siddharth Ramachandran, proved that we can use twisted light to send data through optical fibers. Researchers demonstrated the transmission of 1.6 terabits of data per second over one kilometre of optical fiber.

As we know, the world is going through a challenging phase, where enhancing employability and performance is a growing concern. Staying technically current is very important. The role of technical societies like IEEE is crucial in providing a platform for professional networking, career resources, recognition and continuing education. At PISB, many events are conducted throughout the year and are widely appreciated by students, acclaimed academicians and industry professionals alike. The events include IEEE Day, workshops, Special Interest Group (SIG) activities, Credenz and Credenz Tech Days. Credenz is the annual technical event held in September each year.

I thank all the authors for their contributions and interest. On behalf of the IEEE Computer Society Pune Chapter and the IEEE Pune Section, I wish PISB as well as this newsletter grand success. I heartily congratulate the P.I.N.G. team for their commendable efforts.

-Dr Rajesh B. Ingle
Branch Counsellor

flashbacks
Reminiscence of former members

Three years back I signed myself up to be a part of an organization that would inspire me to do something apart from the daily humdrum of college life. Little did I know that, in the years that followed, PISB itself would grow with us and become one of the most popular chapters of IEEE. Since its inception, PISB has earned several laurels and awards, such as the ‘Highest number of members in a Student Branch’ (thrice, worldwide), the IEEE ‘Member-get-a-Member’ award, and more. One particular section of this branch which has grown distinctively is the PICT IEEE Newsletter Group (P.I.N.G.). It went from being a simple college-level newsletter to a prominent technical magazine, acknowledged by industry professionals throughout the IEEE fraternity and appreciated by foreign delegates visiting various All India Student Conferences.

Despite the change in the number of members, event dynamics, hurdles and facilities, what has remained constant through all these years is the unyielding spirit of this Student Branch, the will to make it better every approaching day, the enthusiasm and of course, the remarkable and unforgettable friendships which are forged along the way; something I will cherish the most!

-Vashishtha Adtani
Ex Secretary, PICT IEEE Student Branch

If there’s one thing about my engineering life that I wish I never had to let go, it has to be undoubtedly the PICT IEEE Student Branch. We, as a Student Branch, had numerous activities going on throughout the year, be it working for Credenz, IEEE Day, IEEE Xtreme in one semester or Credenz Tech Dayz, NCC and Team Appointments in the other. PISB is the one thing that kept me going and helped me pass through the monotony of engineering life. I would like to tell all the current members of PISB, enjoy being a part of it to the fullest because this ride is definitely going to be one of the best phases of not only your engineering but your entire life. Cherish the lessons that you get to learn throughout this journey and make some amazing friends. PISB has taught me a great deal of things; you can expect the same out of it and I bet it won’t disappoint you. Cheers to PISB!

-Kshitiz Dange
Ex Chairperson, PICT IEEE Student Branch

My association with PISB began when I was in my second-year of engineering, and today, three years hence, it is overwhelming to see the branch grow with each passing year. It has broadened its horizon while reaching out to a multitude of people with its contagious enthusiasm and remarkable events. This newsletter too, has gone far and beyond the college premises, with industry experts, faculty members and students from across the country publishing articles and contributing to its success. The achievements of an organization are the results of the combined effort of each individual, and this student branch is driven by a bunch of truly zealous and dedicated students. With the generous support and encouragement of all the people connected with PISB, the success stories shall continue to reverberate for a long time to come!

-Dimple Shah
Ex P.I.N.G. Editor


Page 4: P.I.N.G. Issue 9.1


Business Intelligence
Enhancing your business facets by transforming data into meaningful information

In today’s competitive global business environment, understanding and managing enterprise-wide data is crucial for making timely decisions and responding to changing business conditions. Tremendous amounts of data are generated by day-to-day business operations and applications. In addition, there is an abundance of valuable data available from external sources such as market research organizations, independent surveys and quality testing labs. Business Intelligence has emerged as an increasingly popular and powerful concept of applying information technology to turn these huge islands of data into meaningful information for better business decisions.

“How you gather, manage and use information will determine whether you win or lose.” - Bill Gates

Business Intelligence helps to improve:
1. Quality and speed of decisions
2. Customer service
3. Integration and consistency of information
4. Quickly spotting business opportunities
5. Reduction in costs
6. Overall data quality

As per the Gartner research team, Business Intelligence and Analytics will be the top priority of CIOs in the year 2013 and the years that follow.

Typically, a Business Intelligence Suite consists of four fundamental components:

1. Data Warehouse: A database to store the data.
2. Extraction, Transformation and Loading (ETL): An ETL process to extract data from various sources and load it into the Data Warehouse database.
3. On-Line Analytical Processing (OLAP): Analysis/query/reporting of the data stored in the Data Warehouse.
4. Data Mining: Statistical analysis of data stored in the Data Warehouse for forecasting and prediction of data trends.

A typical end-to-end business intelligence project execution covers various phases as depicted below:

1. Data Modelling: Designing the structure of a warehouse or mart that helps the OLAP or reporting systems create reports easily, thereby contributing to decision-making.
2. ETL Process: Covers identification of various heterogeneous sources like mainframe data, RDBMS data, external agencies' data, etc., and the mapping of this data and its transformation rules so that it fits into the structure defined in step 1 (a simplified sketch follows this list).
3. Data Profiling and Cleansing: Identification of data anomalies and methods to fix this data during the ETL process to get it into the required structure and quality.
4. Metadata Management: Maintaining the data and its transformations from the operational source to the target BI systems.
5. OLAP and Reporting: Analytical modelling and reporting, where data gets converted into information.
6. Scheduling: Of ETL and reporting processes, to bring in automation.
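To make the ETL step concrete, here is a minimal Python sketch of an extract-transform-load pass. The file name, column names and cleansing rules are hypothetical, purely for illustration of the flow described above.

```python
import csv

# Extract: pull rows from a hypothetical operational source (a CSV export here)
def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

# Transform: apply cleansing and transformation rules so rows fit the warehouse structure
def transform(rows):
    for row in rows:
        if not row.get("customer_id"):            # data cleansing: drop anomalous rows
            continue
        yield {
            "customer_id": int(row["customer_id"]),
            "amount_usd": round(float(row["amount"]), 2),
            "order_date": row["date"][:10],        # normalise to YYYY-MM-DD
        }

# Load: write the conformed rows into the data warehouse (stubbed with a list here)
def load(rows, warehouse):
    warehouse.extend(rows)

warehouse = []
load(transform(extract("orders.csv")), warehouse)
print(len(warehouse), "rows loaded")
```

A real pipeline would load into warehouse tables and be triggered by the scheduling layer mentioned in step 6, but the extract/transform/load shape stays the same.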

Future of Business Intelligence

Business Analytics is going to be the centre of business model re-invention. The statistics published by Gartner this year highlight the future scope of business intelligence. Market-leading organizations are embracing continuous improvement and innovation while leveraging the three basic approaches to BI:
1. Deciphering what happened
2. Impacting the here and now
3. Creating a new future

Along with the existing BI facets, the future of BI is moving towards predictive analysis, data visualization, real-time decision-making, in-memory analytics, Big Data, NoSQL databases and mobile BI. Some of these advanced BI areas utilize object-oriented technology stacks as well. Thus, the competencies needed to use BI have evolved from a reactive reporting model, to a real-time and event-triggered “we can have an immediate impact” model, to a method of modelling future scenarios that reflects a “Which opportunities will we seize?” culture.

BitWise has been a trusted partner to its clients for over 15 years in building and maturing the business intelligence systems that help them make effective business decisions. BitWise’s technical experience and expertise enable us to partner with leaders in BI technology such as MarkLogic, Tableau, Talend, Tibco and Microsoft.

To satisfy your Business Intelligence knowledge appetite, join us at http://forums.bitwiseglobal.com

expertspeak
Insights from the Industry

-BitWise Solutions Pvt. Ltd.

Australian researchers are trying to build a micro robot that mimics the swimming stroke used by E. coli bacteria and could take a biopsy from a patient after being injected into their bloodstream.

Micro robot

Page 5: P.I.N.G. Issue 9.1


Disruptive technologies
Or just another fizz in the innovation bubble

Consider this: in the early 20th century, a car on the streets would be a sight to behold. People would gather around this novel invention and stare wide-eyed, marveling at every perceived “magical” aspect of it. To them, a car was like a page straight out of a science fiction novel. To them, horses and horse carriages were the de facto way to travel. Fast forward to the present: a car on the street, and no one bats an eye. However, if a couple were to move around in a carriage, well, history just might repeat itself.

So what caused such a massive change? The example described above is a manifestation of the effects of a disruption: a disruption in the way things get done. For the uninitiated, disruptive technologies are technological breakthroughs or innovations, recently emerged from research, that can have a significant hand in altering an existing market or, in some exceptional cases, creating a market all on their own by radically altering the way things get done.

Let us now take a look at the latest and greatest advances in technology that we should keep an eye out for. Disruptors to watch out for:

Additive Manufacturing
We all know about 3D printing, right? Additive manufacturing is the industrial version of 3D printing. It is being used to produce plastic prototypes at a rapid pace: you can have a custom model ready in much less time and at a fraction of the cost. 3D printing is gaining a lot of traction in terms of publicity, with people coming up with new and novel ways to apply the technology. Additive manufacturing is the disruptor in this domain. Accordingly, General Electric has invested heavily in R&D to gain the ability to quickly manufacture prototypes of its complex parts.

How does it work?
There are several methods behind additive manufacturing. The one that has the potential to disrupt, and which was identified by GE to be of tremendous strategic importance, is the sintering process. Selective laser sintering (SLS) is one form of sintering used primarily in 3D printing. SLS relies on a laser to melt a flame-retardant plastic powder, which then solidifies to form the printed layer. Sintering is naturally compatible with building metal objects, because metal manufacturing often requires some type of melting and reshaping. It is waste-free and light-weight.

Verdict: Additive manufacturing has the potential to disrupt the entire manufacturing industry.

Any item that can be manufactured using this technology has the potential to be affected, leaving us on the verge of a major industrial revolution.

Memory Implants
A maverick neuroscientist, Theodore Berger, believes he has deciphered the code by which the brain forms long-term memories. He believes he can help patients recover from severe memory loss via electronic implants. Berger was involved in designing silicon chips that mimic signal processing akin to neurons. He believes he is now capable of creating long-term implantable memory chips that can act as reservoirs of data for retrieval. The idea seems so preposterous that many of his colleagues considered him crazy. Given the success of his recent experiments, however, he has positioned himself as a credible visionary. His experiments show how a silicon chip, externally connected to monkey brains via electrodes, can process information just like actual neurons. In an impressive experiment, Berger demonstrated that he could also help monkeys retrieve long-term memories from the part of the brain that stores them.

Verdict: Memory implants may be a long way down the pipeline; however, the technology might be the greatest booster to human intellect. It is definitely a disruptor that borders on the edge of being probable or exceedingly crazy.

Photosynthetic Energy Conversion
We all depend on plants for our sustenance. Now, we might have to start relying on them for our energy needs.

In revolutionary research conducted at the Nano Electrochemistry Laboratory (NEL), it has been shown that plant parts can be wired into electrical circuits in order to trap the electrons that plants generate from sunlight. The researchers interrupt the pathway of natural photosynthesis and make use of an intermediate product: the electrons produced during photosynthesis.

They have worked this out by extracting the plant machinery that drives the photosynthetic reaction, called thylakoids, and immobilizing it on a bed of carbon nanotubes, which act as an electrical conductor, capturing the electrons and sending them along a wire. The major hurdle is making it long-lasting: currently there is no mechanism to replenish this layer of photosynthetic cells.

Verdict: Not mature unless the drawbacks are addressed. If all goes well, it might be disruptive.

Deep Learning
Ray Kurzweil, a scientist dabbling in Big Data and Artificial Intelligence, is working on a truly intelligent computer that can mimic the way our mind works. His goal is to come up with a computer that is inherently intelligent like the human brain: a computer that is able to understand language and make inferences and decisions on its own. After realizing he needed access to Google-scale data and computing power, he approached Larry Page and explained his idea. Meanwhile, Larry was already researching deep into deep learning (no pun intended). Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old. But due to improvements in mathematical formulae and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.
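As a rough illustration of those "layers of virtual neurons" (not Google's or Kurzweil's actual system), the following numpy sketch runs a toy two-layer forward pass; the layer sizes and random weights are arbitrary assumptions.

```python
import numpy as np

def relu(x):
    # simple non-linearity applied at each layer of virtual neurons
    return np.maximum(0, x)

rng = np.random.default_rng(0)
x = rng.normal(size=784)                     # e.g. a flattened 28x28 image (assumption)
W1, b1 = rng.normal(size=(128, 784)), np.zeros(128)
W2, b2 = rng.normal(size=(10, 128)), np.zeros(10)

hidden = relu(W1 @ x + b1)                   # first layer of virtual neurons
scores = W2 @ hidden + b2                    # output layer, e.g. 10 class scores
probs = np.exp(scores - scores.max())        # softmax turns scores into probabilities
probs /= probs.sum()
print(probs.argmax())                        # index of the most likely class
```

Real deep-learning systems stack many such layers and, crucially, learn the weight matrices from data rather than drawing them at random.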

Verdict: Google is currently very secretive about the prospects of deep learning. Some prominent applications could be better image search, better audio-video interaction with computers, automated car driving systems, and better search results, with machines able to understand what exactly you want from a search. The applications of deep learning are endless, and this is one technology that might just change the world.

On this note, we come to an end. The purpose of this article is not to amaze the reader with the technical advances that are happening, but rather to bring focus to what is more challenging for us as human beings: the broader perspective of technology. Making a cool app or finding a novel way of optimizing things with a flashy, more user-friendly design might be awesome, but the core research is happening at a confluence of streams. The research that actually matters happens when we, as computer scientists, collaborate with professionals across multiple fields, bringing our respective expertise together to solve the really difficult problems. We, as an industry, have a pivotal role in the development of mankind, and this article is just a window to the enormous possibilities that lie further up the road. Think about it. Act upon it.

-Aamir Mushtaq
Software Engineer
NTT Data

Simulating 1 second of human brain activity takes 82,944 processors: using the NEST software framework, researchers in Japan managed to simulate 1 second of biological brain processing time, requiring one PB of effective system memory.

NEST software

P.I.N.G. Featured Article

Page 6: P.I.N.G. Issue 9.1


QA Myth busters
Understand how Quality Assurance fits into the software development lifecycle

In today’s booming web and mobile application market, filled with competition between vendors giving users varied options, the quality of the product becomes the deciding factor. The complexity involved and the time and effort required to achieve a quality product tend to be underestimated. The five most common software QA (Quality Assurance) myths prevalent in the software industry are:

Quality Assurance = Testing
QA professionals are more than just testers; they are individuals who understand the development process, have an analytical mindset, practice good communication skills, and possess technical acumen. Good quality assurance encompasses the entire software development process, from the very start of requirements gathering all the way to delivery and maintenance.

Development complete? Now let’s start QA
A lot of projects are planned with a certain amount of testing to be carried out once development has been completed, which seems sensible as it allows you to test and fix the completed system in its entirety through a number of quality assurance cycles. However, it is always cheaper to fix bugs earlier in the development cycle rather than waiting until the end, when they are likely to have become more deep-seated and the code is no longer fresh in the developer’s mind.

Automate everything and eliminate manual testing!
Automated tests can reduce the need for some repetitive manual testing, but they tend to use the same set of inputs on each occasion. A piece of software that consistently passes a rigid set of automated tests may not fare so well once it is subjected to the more random and unpredictable inputs of human testers. The expert eye of a seasoned quality assurance professional will provide a more rigorous test than an automated script. Effective testing is best achieved through a combination of automated and manual testing.
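To illustrate the fixed-inputs point, here is a hypothetical pytest-style test file; the apply_discount function and its values are made up for the example.

```python
# test_discount.py -- run with `pytest`; function and values are illustrative only
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    # the same fixed inputs are exercised on every run of the suite
    assert apply_discount(100.0, 10) == 90.0

def test_no_discount():
    assert apply_discount(59.99, 0) == 59.99

# A human tester, by contrast, might probe unpredictable inputs such as
# negative prices, a 150% discount, or non-numeric values.
```

Automated suites like this catch regressions cheaply on every build; exploratory manual testing covers the inputs nobody thought to script.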

Effective QA = 100% Bug-Free Software
Complex systems will never be completely ‘bug-free’. The goal should be to ensure that coverage is provided to a point where the business and the stakeholders are comfortable that their risk is mitigated to an acceptable level. On larger systems, the maintenance phase of the life-cycle is principally concerned with managing an on-going list of “known defects” that are tolerated because the overall level of system quality is regarded as sufficient.

QA is just boring repetition
Repetition might be tedious, but by that logic, programming, web designing, analysis, accounting, banking and day-to-day vital activities like eating and sleeping could also be considered boring if you only look at the ‘repetition’ part. One of the key traits of a really good QA professional is that he or she looks at testing as an information-gathering activity, done with the intent of exploring and discovering answers to questions that nobody had asked before. So now, does that sound boring to you?

The Virtual Inevitability
Transcending into the cybernetic future

It is said that when we dream, we live in a virtual world. But what exactly is a virtual world? Anything that we can touch, see or feel is real, and creating something that feels real is virtual. To appreciate virtualization and its benefits, we need to understand what it is along with its implications.

Virtualization, in IT parlance, refers to the various techniques of creating a virtual version of something, such as a virtual desktop, server or storage/data center. Virtualization is typically used to save infrastructure costs, increase efficiency and reduce regular maintenance intervals. Data centers are more than just virtual storage space; they are the lifeblood of the Internet and telecommunications. Without them, making telephone calls or surfing the web would be a near-impossible task. The key benefits of a virtualized setup are as follows:

Reduced heat
Servers generate heat; the only way to reduce the heat is to use fewer servers. Virtual servers use less physical hardware and thus reduce heat generation.

Faster redeployment
When we use a physical server and it dies, the redeployment time depends on a number of factors: Is the backup server ready? Do we have an image of our server? With virtualization, redeployment can be completed within minutes. Virtual machine snapshots can be enabled with just a few clicks, and with virtual backup tools, redeploying images is so fast that end users will hardly notice there was an issue.

No vendor lock-in
One of the good points about virtualization is the abstraction between software and hardware. The virtual machines don’t really care what hardware they run on. This means we don’t have to be tied down to one particular vendor.

Better disaster recovery
Disaster recovery is easier when the data center is virtualized. With up-to-date snapshots of the virtual machines, restoration becomes quicker. And should disaster strike the data center itself, we can always move those virtual machines elsewhere.

Single-minded servers
With an all-in-one services server, not only are we looking at a single point of failure, but also at services competing for resources and with each other. With virtualization, we have a cost-effective route to separating services onto their own servers. This results in a much more robust and reliable data center.

Virtualization offers a powerful way to help relieve the typical headaches that plague administrators day in and day out. If you haven’t already begun to make use of virtualization in your data center, it’s time you start. Even if you migrate only a simple file server to virtualized technology, you’ll quickly see the benefits and, eventually, you may want your entire data center virtualized.

-Nikhil Khatri
Calsoft Inc.

-Jesal Mistry
Software QA Consultant
Thoughtworks

Intel has come up with an improved hybrid optical fibre technology, MXC, designed to provide speeds near 1.6 Tbps. The fibre is a combination of silicon photonics and a new form of Corning fibre. It is designed essentially for server usage and is one of the first practical improvements to fibre optic technology.

MXC

Page 7: P.I.N.G. Issue 9.1


Quantum Computing
Envisioning future computing on an atomic scale

Will we ever have the amount of computing power that we need or want? If, as Moore’s Law states, the number of transistors on a microprocessor continues to double every 18 months, the year 2020 or 2030 will find the circuits on a microprocessor measured on an atomic scale. The next logical step will be to create quantum computers, which will harness the power of atoms and molecules to perform memory and processing tasks. Quantum computers have the potential to perform certain calculations significantly faster than any silicon-based computer.

While computers have been around for the majority of the 20th century, quantum computing was first theorized less than 30 years ago by a physicist at the Argonne National Laboratory. Paul Benioff is credited with the first application of quantum theory to computers in 1981. Benioff theorized about creating a quantum Turing machine.

Today’s computers, like a Turing machine, work by manipulating bits that exist in one of two states: a 0 or a 1. Quantum computers aren’t limited to two states; they encode information as quantum bits, or qubits, which can exist in a superposition of 0 and 1. Qubits may represent atoms, ions, photons or electrons, together with their respective control devices, working together to act as computer memory and a processor. Because a quantum computer can contain these multiple states simultaneously, it has the potential to be millions of times more powerful than today’s most powerful supercomputers.

However, there is a problem with this: the ‘observation dilemma’, originating from the famous uncertainty principle.

In the strange world of quantum computers, if you try to look at the subatomic particles, you could bump them and thereby change their value. A quantum system collapses into a classical system when it is directly ‘observed’. So, if you look at a qubit in superposition to determine its value, the qubit will assume the value of either 0 or 1, but not both (effectively turning your spiffy quantum computer into a mundane digital computer). To make a practical quantum computer, scientists have to devise ways of making indirect measurements that preserve the system’s integrity. To solve the problem, quantum computers also utilize another aspect of quantum mechanics known as quantum entanglement. In quantum physics, if you apply an outside force to two atoms, it can cause them to become entangled, and the second atom can take on the properties of the first atom. If left alone, an atom will spin in all directions; the instant it is disturbed, it chooses one spin, or one value, and at the same time the second entangled atom will choose the opposite spin, or value. This allows scientists to know the value of the qubits without actually looking at them.
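A tiny numpy sketch (a classical simulation for illustration only, not a real quantum device) of a qubit in superposition and an entangled pair whose measured values always come out opposite:

```python
import numpy as np

# A single qubit in equal superposition: (|0> + |1>) / sqrt(2)
qubit = np.array([1, 1]) / np.sqrt(2)
print(np.abs(qubit) ** 2)                    # Born rule: measurement probabilities [0.5 0.5]

# Two entangled qubits in the state (|01> + |10>) / sqrt(2)
pair = np.array([0, 1, 1, 0]) / np.sqrt(2)

# Measurement collapses the state: outcomes 01 and 10 each occur with probability 0.5,
# so the two qubits always disagree; reading one immediately fixes the other.
outcome = np.random.choice(4, p=np.abs(pair) ** 2)
print(format(outcome, "02b"))                # prints "01" or "10", never "00" or "11"
```

The state vector here grows as 2^n for n qubits, which is exactly why classical simulation breaks down and real quantum hardware becomes interesting.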

Integer factorization with an ordinary computer is believed to be computationally infeasible for large integers that are the product of a few prime numbers (e.g. products of two 300-digit primes). By comparison, a quantum computer could efficiently solve this problem using Shor’s algorithm to find the factors. This ability would allow a quantum computer to decrypt many of the cryptographic systems in use today, in the sense that there would be a polynomial-time algorithm for solving the problem. In particular, most of the popular public key ciphers are based on the difficulty of factoring integers (or the related discrete logarithm problem, which can also be solved by Shor’s algorithm), including forms of RSA. These are used to protect secure Web pages, encrypted email, and many other types of data, and breaking them would have significant ramifications for electronic privacy and security. Besides factorization and discrete logarithms, quantum algorithms offering a more-than-polynomial speedup over the best known classical algorithm have been developed for several problems, including the simulation of quantum physical processes from chemistry and solid state physics, the approximation of Jones polynomials, and solving Pell’s equation. No mathematical proof has been found to show that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. For some problems, quantum computers offer a polynomial speedup. The most well-known example of this is quantum database search, which can be solved by Grover’s algorithm using quadratically fewer queries to the database than are required by classical algorithms. In this case, the advantage is provable.

One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. However, other sources of decoherence also exist: examples include the quantum gates, and the lattice vibrations and background nuclear spin of the physical system used to implement the qubits. Decoherence is irreversible, as it is non-unitary, and is usually something that should be highly controlled, if not avoided. Decoherence times for candidate systems typically range between nanoseconds and seconds at low temperatures. These issues are much more difficult for optical approaches, as the timescales are orders of magnitude shorter, and an often-cited approach to overcoming them is optical pulse shaping. Error rates are typically proportional to the ratio of operating time to decoherence time, hence any operation must be completed much more quickly than the decoherence time.

Quantum computers could one day replace silicon chips, just as the transistor once replaced the vacuum tube. But for now, the technology required to develop such a quantum computer is beyond our reach. Most research in quantum computing is still very theoretical. The most advanced quantum computers have not gone beyond manipulating more than 16 qubits, meaning that they are a far cry from practical application. However, quantum computers may one day perform calculations more easily and quickly than our incredibly time-consuming conventional computers. The quest is on for a brave new world where atoms, molecules and wave functions may play the role of Sherlock Holmes or James Bond!

-Dr Tirthajyoti Sarkar
Fairchild Semiconductor
Pune


By the early 1970s, hacker “Cap’n Crunch” (a.k.a. John Draper) had used a toy whistle to match the 2,600-hertz tone used by AT&T’s long-distance switching system. This gave him access to call routing (and brief access to jail).

Draper’s whistle

Page 8: P.I.N.G. Issue 9.1


Golden age of learning
The paradigm shift to virtual classrooms

Today any regular student from any university can boast about having done courses from premier institutes like Stanford, Princeton or MIT. And it is not just one student who can boast of this. Consider this statistic: there were eighty thousand students from around the world in ‘Introduction to Statistics’ on Coursera in 2012. This year the number is expected to easily cross the hundred thousand mark.

The classroom has practically transitioned to the ‘cloud’, cloud referring to the World Wide Web here. The high-quality learning that was initially the privilege of a chosen few is now literally open to anyone and everyone who is interested, provided they have a decent internet connection.

MOOC (Massive Open Online Course) platforms like Coursera and Udacity have the potential to bring about a paradigm change in higher education. Courses ranging from psychology to computer science, from music to economics, offer a wide gamut of gems. While both the aforementioned platforms are for-profit organizations, there are other non-profit initiatives as well. For example, MIT OCW (OpenCourseWare) is a superb collection of over six hundred courses from MIT that includes course lecture notes, videos, assignments and the whole shebang. Similarly, on the lines of Coursera and Udacity, MIT and Harvard started their own MOOC platform called edX, which has its own collection of fantastic offerings.

TED is a platform where achievers from different areas of specialization speak on a plethora of topics. The topics range from how 3D printing can help create future airplanes to how malaria is being fought by the Bill & Melinda Gates Foundation. The talks introduce you to an infinite world of possibilities, and one can really get an adrenalin boost wondering about the many cool things that are happening in the world.

We cannot complete the discussion about online learning portals without talking about Khan Academy. Immensely popular and built on a different model, it offers small capsules on specific topics, ranging from advanced economics to elementary algebra. However, as we move forward, we must ask ourselves: Are MOOCs all good, or is there a potential threat lurking somewhere?

In human history, this is probably a moment of inflection. Are we moving towards a truly golden era of learning? Or is this another fad? What happens to the relevance of existing universities and their curricula? Will students prefer learning algorithms from Stanford University and ignore their classroom faculty? Will it force the existing faculty members to put in extra effort to justify that they add more value than a non-interactive session?

In the last 10-15 years, there has been a complete shift towards being open and collaborative. Consider the open source software development model, of which Linux is the poster child. The source code is open to anyone and everyone with a programming aptitude, and one can read the source code to understand the internals of how the OS works.

The Opportunities
As a student, one should not be limited by the restrictions of one’s college or the skills of the college faculty. One should have the flexibility to access the best material from the best available sources and the best professors. For an individual who is hungry to learn, this is a very exciting development, and it is not just in formal education.

Moreover, it is not just as a student that one gets to exploit the full potential of these portals. As Swami Vivekananda succinctly put it, ‘learning is living’. As a professional, one can access the material to refine or revise one’s skills. The reasons could be manifold: one can be more effective at the workplace with new knowledge, pursue new external opportunities, use this newfound knowledge to impress a chosen few at cocktail parties, or, the finest of all, simply learn for the pure pleasure of learning.

The Threats
MOOCs are portals to an immense number of options, and that’s where the first threat lies. With the success of Coursera and Udacity, it is safe to assume that we will soon have even more players on the web. The good news is that users will have more options; the bad news is that we will now have the overhead of selecting the best based on certain criteria. Numerous options can overwhelm a person. With Coursera launching courses on algorithms from both Stanford and Princeton in quick succession, one can get into mental fatigue because one wants to attend both! How do we select? Will it again be driven by herd mentality? What if sub-optimal material gets pitched as superior stuff simply because of better packaging and marketing? There are a lot of what-ifs, and I don’t have the answers.

What about the relevance of our current educational institutions? An online exchange can never match an interactive session with expert faculty, but it will still be a far better alternative to mindless faculty who just pay lip service to the subject. Coursera has also started offering ‘Signature Track’, where, for a small fee, a student can actually get a certificate of completion from the reputed university offering the course. With this development, the question of relevance is even more important now. Are we moving towards an era of open online education, where students decide in the privacy of their rooms what they want to learn and when they want to learn it, get their certificates, and employers judge them accordingly? Is this the end of the regulated, confined classroom approach as we know it?

Well, mostly the answer is ‘No’. The two systems have the potential to exist in parallel, and the reason these platforms are famous is the faculty members from reputed universities who teach there.

To summarize, I personally believe that the opportunities far outweigh the potential threats. If one is judicious enough, there is a lot to harness from these platforms. I would say ‘Jai Ho’ to these developments, and, being a teacher myself, the future of learning and education indeed seems bright to me.

our mentor speaks
Imparting words of wisdom

[email protected]

The most powerful supercomputer system in the world was built to analyse the data generated by the LHC (Large Hadron Collider). It’s called the Grid and is formed from tens of thousands of interconnected computers scattered around the world. The data recorded by each of the big experiments at the LHC will fill around 100,000 dual-layer DVDs every year.

Grid

Page 9: P.I.N.G. Issue 9.1


The new Solar Cell Generation
Sensitized solar cells that enhance possibilities for renewable energy resources

Most of the power generated nowadays is produced using fossil fuels. This emits tons of carbon dioxide, carbon monoxide and other pollutants every second, which are very harmful to society. More importantly, fossil fuels will eventually run out. In order to make the development of our civilization sustainable and to cause less harm to the environment, efforts are being made to establish a low-carbon society by reducing dependence on fossil fuels, including petroleum, coal and natural gas. Because of the increasing demand for clean and alternative sources of energy, the solar energy industry is one of the fastest growing enterprises in the market. Nowadays, there are several directions for solar energy technology development. For example, photovoltaic systems directly convert solar energy into electrical energy, while concentrated solar power systems first convert solar energy into thermal energy and then further convert it into electrical energy through a thermal engine. In all these processes, the initial installation cost is high; therefore, these technologies are not used by our society to their full extent. Attempts are being made to provide non-conventional sources of energy to our society at an affordable price.

The first generation of solar cell was designed using high-purity, defect-free crystalline silicon (c-Si). The cost of producing highly purified silicon is very high, and because of the high material cost and low conversion efficiency, the cost of power production by these cells is several times more than that of conventional sources. The second generation of solar cell was developed using amorphous silicon (a-Si), which can be deposited over a large area by chemical vapor deposition (CVD) from the gaseous phase of silicon compounds. In this process the material requirement is lower, and a-Si has a better optical absorption coefficient. The efficiency achieved in a-Si solar cells is around 8-10%, and the cost of production is also reduced to a great extent. However, long exposure to light results in a light-induced degradation effect, which reduces the efficiency and lifetime of the cell.

The third generation of solar cell, designed using organic materials, is known as the dye-sensitized solar cell (DSSC), popularly called the organic solar cell. Unlike conventional silicon-based solar cells, dye-sensitized solar cells consist primarily of a photosensitive dye and other substances. A DSSC generates electricity by converting the energy of light absorbed by the dye, in a process loosely analogous to photosynthesis, wherein light energy is converted to chemical energy. DSSCs can be produced from low-cost materials using simple manufacturing processes like coating and printing. This means a significant cost reduction, to 1/5th to 1/10th of the cost of silicon solar cells. The average efficiency achieved is 9-10%. These solar cells can be installed on the roofs of houses and motor garages, and can also be used in the interior decoration of homes as a source of power. To some extent a DSSC can store energy, enabling it to provide consistent power during the night. The main problem with these cells is that they are sensitive to temperature and UV rays. If the efficiency of the cell can be increased using a mixed dye, then in the near future this technology might provide an alternative source of clean energy at an affordable price.

When light is incident on the dye, it enters an excited state, enabling it to emit electrons. The electrons pass through the transparent electrode and move towards the catalytic electrode through the external circuit. The tri-iodide ions in the electrolyte gel accept the electrons and get reduced to iodide ions. The iodide ions in the gel pass the electrons back to the dye molecules and are oxidized back into tri-iodide ions. The dye molecules continue to emit electrons as long as they are exposed to light. This results in the generation of electricity.

-Dr K. C. Nandi
Professor of Physics
Pune Institute of Computer Technology, Pune


OFDM
The backbone of 4th generation mobile communication

Nowadays, the demand for wireless communication with high data rates is rapidly increasing. Mobile internet, video on demand, internet gaming, high-definition mobile TV, video conferencing, 3D television and cloud computing have created a thirst for high-speed mobile wireless services. To fulfill these demands, the major challenges for service providers are frequency-selective fading, intersymbol interference, diversity gain, data capacity, power efficiency and bandwidth gain. Orthogonal Frequency Division Multiplexing (OFDM) is the most promising modulation/multiplexing scheme for overcoming these challenges; it has been adopted in the physical layer of many existing wireless as well as wireline networks.

OFDM
OFDM is a multicarrier system that uses the Inverse Fast Fourier Transform (IFFT) and Fast Fourier Transform (FFT) to transmit and receive data in a digital wireless network. The available bandwidth is divided into many narrow bands called subcarriers. These subcarriers are orthogonal to each other and follow sin(x)/x spectra. The data is transmitted in parallel on these subcarriers.
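The IFFT/FFT duality is easy to see in a few lines of numpy. The sketch below assumes an ideal, noiseless channel and an arbitrary choice of 64 QPSK subcarriers with a 16-sample cyclic prefix; it is an illustration of the principle, not a standard-compliant modem.

```python
import numpy as np

N = 64                                        # number of orthogonal subcarriers (assumption)
bits = np.random.randint(0, 2, 2 * N)         # random payload bits
# Map bit pairs to QPSK symbols, one symbol per subcarrier
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

tx = np.fft.ifft(symbols)                     # transmitter: IFFT places symbols on subcarriers
tx_cp = np.concatenate([tx[-16:], tx])        # cyclic prefix guards against intersymbol interference

rx = tx_cp[16:]                               # receiver: strip the prefix (ideal channel assumed)
recovered = np.fft.fft(rx)                    # FFT separates the orthogonal subcarriers again

assert np.allclose(recovered, symbols)        # symbols recovered exactly over a perfect channel
```

In a real system the channel would distort each subcarrier, and equalization per subcarrier is what makes OFDM robust to frequency-selective fading.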

Regulation
The International Telecommunication Union (ITU) coordinates efforts to improve telecommunication infrastructure and assists in the development and coordination of worldwide technical standards. The most recent standard defined by ITU-R is IMT-2000, which has already been implemented as 3G. After reviewing the current needs and demands of the global telecommunication market, ITU-R has defined a new standard called IMT-Advanced for the next generation mobile communication network, also known as 4G.

The 4th Generation Mobile Communication Network
The core specification for 4G standards is a data rate of 100 megabits per second (Mbit/s) for high-mobility communication (such as from trains and cars) and 1 gigabit per second (Gbit/s) for low-mobility communication (such as pedestrians and stationary users). To implement these specifications, two bodies, the 3GPP group and IEEE (building on WiMAX), developed standards named 4G-LTE (Long Term Evolution) and Mobile WiMAX (802.16m) respectively. Both have adopted OFDM as a physical layer standard, in different flavors.

Various wireless standards based on OFDM:
1. Digital Audio Broadcasting: DAB/EUREKA 147, DAB+, Digital Radio Mondiale, HD Radio, T-DMB and ISDB-TSB.
2. Terrestrial Digital Video Broadcasting: DVB-T and ISDB-T, DVB-H, T-DMB, and the MediaFLO forward link.
3. Wireless PAN: the ultra-wideband (UWB) IEEE 802.15.3a implementation suggested by the WiMedia Alliance.
4. Wireless LAN radio interfaces: IEEE 802.11a, g, n and HIPERLAN/2.
5. Wireless MAN: the broadband wireless access (BWA) standard IEEE 802.16e (Mobile WiMAX).
6. The mobile broadband wireless access (MBWA) standard IEEE 802.20.
7. The Flash-OFDM cellular system.

-Zakee Ahmed
Asst. Professor, E&TC Department
Pune Institute of Computer Technology, Pune

SPIRI is an autonomous, airborne, Linux-powered device equipped with sensors, cameras, Wi-Fi and cloud support, which works as a programmable aerial transmitter to search, record and transmit the data it visualizes.

SPIRI

Page 10: P.I.N.G. Issue 9.1


Project Loon
Taking Wi-Fi sky-high

If you think of a company with unique, absurd yet brilliant ideas, Google is the first name that comes to mind. From Google Maps to Google Glass, Google has been coming up with the most radically remarkable ideas. The latest idea from their bag of discoveries is Project Loon.

The goal
To provide quality broadband service to the billions living without it.

The principle: Balloons are cheap and can use clean green energy.

How does it work?
Several balloons are launched and stationed about 20 km above the ground. At this height, in the stratosphere, there are two layers of winds, clockwise and anticlockwise, which can be used to steer and sail the balloons all across the globe. The Loon balloons, as Google terms them, are super-pressurized, allowing them to stay aloft for over 100 days.

How do they beam the internet?
Signals are transmitted from the balloons to specialized internet antennas, which use radio-frequency technology and are mounted to the side of a home or workplace. Web traffic that travels through the balloon network is ultimately relayed to ground stations, where it connects to the pre-existing internet infrastructure.

How are the antennas powered?
The antennas are powered using the best renewable source of energy available to mankind: sunlight. Solar panels power the balloon antennas and the equipment required to establish communication, and excess energy is stored in batteries to be used efficiently during the night.

What about air traffic?
At 20 km above ground level, there is absolutely no air traffic, so it is safe: a distance of about 10 km exists between the nearest plane and the Loons.

What about speed and security?
During its first pilot test in New Zealand, Google expects speeds comparable to 3G. This is at least 100 times faster than the speed available in remote areas. With regard to the security of data, the balloons carry enough technology to encrypt the data during transmission.

Rounding it up, it is a brilliant idea, and its execution may soon be witnessed around the world. If successful, it will definitely revolutionize the lives of billions of people currently having snail-speed internet or, even worse, no internet at all.

-Aditya Shirole
Pune Institute of Computer Technology, Pune

technocrats speak
Intelligentsia on the rise


ConceptNet
Semantic networking for a smarter AI

In earlier days, it seemed as though the only way to build a common sense knowledge base was by hiring expert engineers. However, in the year 2000, the success of open source and free distribution gave rise to the Open Mind Common Sense (OMCS) knowledge base. Built by more than 14,000 authors around the globe, OMCS forms the foundation of ConceptNet. Its latest version, ConceptNet 5, created by Rob Speer (a researcher at MIT Media Lab) and his team, is considered one of the smartest AI systems to date.

ConceptNet is a semantic network of interrelated nodes. Each node is represented as a concept which is stored as a phrase of Natural Language. This can be illustrated using the following example:

Pen ----(Used for)----> Writing
(Node)  (Justification)  (Node)
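For illustration, a single assertion like the one above could be serialized as a JSON edge along the following lines; the field names here are a plausible sketch, not the exact ConceptNet 5 schema.

```python
import json

# One hypothetical assertion edge connecting two concept nodes
edge = {
    "start": "/c/en/pen",
    "rel": "/r/UsedFor",
    "end": "/c/en/writing",
    "weight": 2.0,                                      # confidence in the assertion
    "sources": ["Open Mind Common Sense contributor"],  # justifications backing the edge
}
print(json.dumps(edge, indent=2))
```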

ConceptNet 5 is a hypergraph (a graph having edges about edges). Each relationship has several justifications pointing to it, indicating the reliability of the assertion. ConceptNet 5, however, is not a conventional graph database: the nodes of the graph have no identification. An edge in ConceptNet 5 is an instance of an assertion, learned from a knowledgeable source. The database can be represented using the JSON (JavaScript Object Notation) and CSV (Comma Separated Values) formats. A complicated graph can be simplified by taking the union of the branches.

Capabilities
ConceptNet performs several human-like language processing functions:

Affect sensing
ConceptNet maintains a set of possible emotions like anger, sadness, happiness etc. An unknown concept can be classified by all the paths that lead to this set.

Novel-concept identification
This is the ability to deduce the meanings of concepts which are not present in the existing database by drawing comparisons to analogous things.

Disambiguation
This is the central task of a Natural Language Processing system: disambiguating the meaning of a word given the context in which it appears.

The road ahead
NLP researchers recently IQ-tested ConceptNet 4, the predecessor of ConceptNet 5. The research showed that the AI system had the smartness and logical reasoning ability of a four-year-old child. The scores, however, varied considerably across different areas; for example, it performed quite poorly in comprehension, which has been attributed to the lack of a good program.

These developments raise many questions about the Future of AI. Till now, AI has always been a theoretical field, largely related to experiments and research. As technology advances and breakthroughs like ConceptNet occur, how far are we from the Age of Intelligent Machines?

-Arijit Pande
Pune Institute of Computer Technology, Pune

Page 11: P.I.N.G. Issue 9.1


Bitcoins
The ruling virtual cryptocurrency

Peer-to-peer technology has touched almost all sectors, from instant messaging and file sharing to search engines. 2009 saw the conceptualization of electronic money in the form of bitcoins. Financial transactions can take place between people regardless of whether they have a bank account or a credit card; the only prerequisites are an internet connection and a computational device. Satoshi Nakamoto, a 37-year-old Japanese mathematician, published the paper describing the concept in 2008 and is regarded as the inventor of this ground-breaking idea.

Bitcoin, the first ever cryptocurrency, is basically a peer-to-peer network technology for digital money. Users can use bitcoins for transactional purposes in place of any available currency. Be it gadgets, home appliances, property, clothing or donations, bitcoins can be used everywhere, provided the vendor accepts them (Wordpress.com, Etsy vendors, Reddit Gold, Namecheap et al.). These transactions take place through widely used bitcoin exchanges such as Mtgox or Bitstamp. The transactions use public key cryptography, which involves a public as well as a private key; the two keys are mathematically related to one another. The private key has to be securely stored in your digital wallet (a small database at the user’s disposal, consisting of all of their bit addresses and the corresponding transactions) and is not to be revealed to anyone. The private key, along with the hashed message, is put through a transformation to output a unique digital signature. This signature, together with the original message and the public key, is used to check the validity of the concerned transaction.
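A minimal sketch of that sign-and-verify flow, assuming the third-party python-ecdsa package and Bitcoin's secp256k1 curve; the message and address are made up.

```python
import hashlib
import ecdsa  # third-party package: pip install ecdsa

private_key = ecdsa.SigningKey.generate(curve=ecdsa.SECP256k1)  # kept secret in the wallet
public_key = private_key.get_verifying_key()                    # shared freely

message = b"pay 0.5 BTC to 1ExampleAddress"
digest = hashlib.sha256(message).digest()                       # the hashed message

signature = private_key.sign_digest(digest)                     # unique digital signature
assert public_key.verify_digest(signature, digest)              # anyone can check validity
```

Changing even one byte of the message makes verification fail, which is what lets the network reject forged transactions.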

Double spending is avoided by publicly displaying all the deals made by a particular bit address, along with its bitcoin amounts, on the block chain. A block describes pending transactions; after the transactions have been completed, each block is linked with the previous block to become part of the block chain. These block chains can be viewed by users via software applications on personal computers or mobiles.

One can acquire bitcoins either by buying them with real money or by mining them. The actual processing of transactions is done by the miners, who handle and manage the currency’s liquidity. A miner is supposed to possess devices with enough computational power to solve a highly complex mathematical problem, and acquires bitcoins in return if the device is lucky enough to crack the correct solution first. This problem is nothing but the generation, with the help of a hash function, of a string having a large number of zeroes at its beginning, which fixes the order of the transactions without any ambiguity. The probability of finding the correct solution is approximately 1 in a trillion. However, mining is open to anyone. An Intel i7 chip with an average processing speed of 2.6 GHz can produce 6.7 million hashes per second, which can yield about 0.0005 BTC per day. In practice, various types of ASICs and FPGAs have to be used for mining, and these are costly and consume a lot of power; hence it is advised not to use personal computers of average computational power. Another alternative is to obtain bitcoins by purchasing them through currency exchanges like Dwolla.
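A toy proof-of-work loop in Python shows the idea of hashing until the result starts with enough zero bits; the difficulty here is tiny compared to the real network, and the block header is a placeholder.

```python
import hashlib

def mine(block_header: bytes, zero_bits: int = 16):
    """Search for a nonce so that double SHA-256(header + nonce) falls below a target."""
    target = 1 << (256 - zero_bits)           # smaller target = more leading zero bits required
    nonce = 0
    while True:
        data = block_header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(data).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1

nonce, digest = mine(b"example block header")
print(nonce, digest)                           # the winning nonce and its hash
```

Raising zero_bits makes the search exponentially harder, which is how the network tunes how often a block is found.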

Every day, only a fixed number of coins can be mined, at the current rate of 25 BTC per block. The total number of bitcoins is capped at 21 million, with the mining reward halving roughly every 4 years (i.e. after every 210,000 blocks) so that this limit is reached around 2140. One can never destroy a bitcoin; you either sell it or redeem it into its equivalent in local currency. Bitcoin transactions carry a fee of about 0.55% at minimum, which is quite reasonable compared with typical online payment fees of about 2-3%. These fees will later reward the miners once all 21 million bitcoins have been mined. It is you, not the government, who controls your money. You also have the option of anonymity, revealing nothing other than your bitcoin address and the bitcoins linked to it.
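The 21 million cap follows directly from this halving schedule; a quick back-of-the-envelope check (50 BTC was the original block reward, halving every 210,000 blocks):

# The block reward starts at 50 BTC and halves every 210,000 blocks, so the
# total supply is a geometric series that converges to about 21 million BTC.
BLOCKS_PER_HALVING = 210_000
reward = 50.0
total = 0.0
while reward >= 1e-8:               # 1 satoshi is the smallest unit
    total += reward * BLOCKS_PER_HALVING
    reward /= 2
print(f"{total:,.0f} BTC")          # ~21,000,000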

Talking of disadvantages, the price of bitcoins has always been volatile, and their purely digital form makes them vulnerable to loss. The media has associated bitcoins with the black market and drugs such as marijuana, which discourages enthusiasts. There is always the possibility of bitcoins being stolen, whether by brute force or by wallet theft. And because of its anonymity, bitcoin is the currency of choice for transactions on the deep web (World Wide Web content not indexed by the usual search engines), adding another negative factor.

All said and done, the bitcoin economy has experienced many ups and downs since its inception. Though the concept still divides investors, some believe that bitcoin today is at a stage similar to the one the internet once went through.

-Kruti Chauhan
Pune Institute of Computer Technology

Pune

Google has bought around 100 companies since 2010, GrandCentral (now serviced as Google Voice) among them, averaging roughly one acquisition every couple of weeks; earlier purchases include the likes of YouTube.

Google

Page 12: P.I.N.G. Issue 9.1


A leap of faith
Ushering in an era of gesture-based human-machine interaction. Is it the end of the mouse as we know it?

Interacting with a computer by moving your hands around like Tom Cruise did in the movie ‘Minority Report’, a concept made popular by numerous sci-fi movies and cartoons, is now a step closer to reality thanks to a new and very affordable motion controller. Pitched by its founder as a technology aimed at replacing the computer mouse, which he called “a needless layer of technological complexity”, the Leap Motion controller is surprisingly small for what it can do. But in its current state, could it really kill the mouse?

Okay, so many of you would be wondering: “What am I even reading about? Killing a mouse? Rubbish!” So, before we head off to the main content, allow me to save you the need to Google this marvel of technology.

The Leap Motion controller is a USB peripheral sensor for human-computer interaction on Microsoft Windows and Mac OS X, with Linux support in development, created by a company called ‘Leap Motion’. Using two cameras and three infrared LEDs, the device observes a roughly hemispherical area to a distance of about 1 meter (3 feet). It is designed to track fingers that cross into the observed area. The smaller observation area and the higher resolution of the device differentiate the product from the Microsoft Xbox Kinect sensor, which is more suitable for whole-body tracking in a space the size of a living room. Along with fingers and hands, objects like pencils can also be used to interact with the Leap Motion.

Many people have been testing it for the past couple of weeks and they’ve found the answer to be “No”. “No”? Was there a question somewhere, you’re thinking? Right, you must’ve been dazzled by the geeky material above.

So, the question was, “In its current state, can the Leap Motion controller kill the mouse?” The answer to that, as many people have found out after testing it extensively, is “No”.


Here are some reasons why many think it won't replace the mouse in a hurry:
1. Your arms constantly get tired while using it, which is apparently its main drawback.
2. It is virtually impossible to use as a pointer to click exact spots on a screen. With your hand and fingers constantly in mid-air, they have a tendency to shake or sway, causing the input to do so as well. This makes for a very difficult and jumpy pointing experience.
3. You'd long for something physical to hold onto. It's surprising how much you miss the feel of a mouse and its haptic feedback once you put it away in a drawer.

Negatives aside, the Leap Motion is very cool when it works and when you have enough energy to use it. Being able to use one of your hands to navigate around the Earth using Google Earth, for example, is spectacular. It would give a new meaning to having the whole world in your hands as you traverse the globe by moving up and down and pivoting from left to right.

Speaking of apps, as of the 8th of August 2013, there were more than 80 apps available for download in ‘Airspace’, the Leap Motion’s app store. These apps were designed specifically for the device and were vetted by the Leap Motion’s creators. A number of websites have been made compatible with the device too, and there are a couple of browser plug-ins available for download on the Chrome store.

Gamers will find an app called ‘GameWAVE’ very useful, as it makes it possible to control almost any existing keyboard or mouse-based game.

“This is as close to being a wizard as you’re ever going to get,” quipped one user after using Leap Motion to control his character in the massively multiplayer online role-playing game (MMORPG) ‘World of Warcraft’.

‘Cut the Rope’, a game that’s been on smartphones and tablets for quite a while and has spawned a couple of even more interesting sequels, is very addictive on the Leap Motion. For the few who aren’t familiar, in this game you slash ropes with your finger to deliver a candy treat to a tiny monster named ‘Om Nom’ while collecting stars along the way. Childish? So what.

There are a number of other apps, including the ‘NYTimes for Leap Motion’ app, which displays the New York Times’ top news feed. You can scroll through a news article using a one-finger clockwise or anti-clockwise gesture. This certainly is a great use of gestures.

‘Molecules’ is also really interesting and it’s what people with a curiosity for science are absolutely going to love. Instead of putting the whole world in your hand, like Google Earth does, it puts all the different types of molecules in your hands. With it you can zoom in on the make-up of a sample DNA segment, caffeine or even insulin. Geeky? Yeah Right!

In conclusion, we can't say that the Leap Motion will disrupt the widespread use of the mouse in its current state. But in the future? Maybe. If anything, I think it will bring gestures to computers rather than being a full-on pointing device for precision and accuracy. Wacom tablets (primarily used by graphic artists and designers) and the mouse should be used for precision, whereas the Leap Motion should be used for stuff like browsing through material, scrolling through articles and Googling selected text using gestures. But before it can be universally adopted, it needs to set some standards.

The experience is what makes it priceless. I mean, the Google Earth app actually makes you feel like you’re the Man of Steel, flying around the Earth. Also, did I mention that you get a Leap Motion SDK along with the device, which you can use to start making your own apps? I’d say this little wonder toy is definitely worth the buy. Enjoy LEAPing!
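Purely as an illustration of what building on that SDK looked like, here is a sketch loosely based on the Python sample that shipped with the early Leap Motion SDK; the Leap module, the Listener class and the on_frame callback follow that sample, but names and behaviour may differ between SDK versions:

# Minimal Leap Motion sketch (assumes the SDK's Leap Python bindings are installed).
import sys
import Leap

class PalmListener(Leap.Listener):
    def on_connect(self, controller):
        print("Leap Motion connected")

    def on_frame(self, controller):
        frame = controller.frame()        # latest tracking frame
        for hand in frame.hands:
            # palm_position is a vector in millimetres relative to the device
            print("Palm at", hand.palm_position)

listener = PalmListener()
controller = Leap.Controller()
controller.add_listener(listener)         # callbacks fire on a background thread
print("Press Enter to quit...")
sys.stdin.readline()
controller.remove_listener(listener)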

-Debojeet Chatterjee
Pune Institute of Computer Technology

Pune

The SwiMP3 waterproof MP3 player works underwater by projecting sound directly to the ear bones instead of using headphones. Rather than sending sound into the ear canal like conventional headphones or earphones, it sits between the ear and the cheekbone and conducts the sound straight to the bones of the ear.

SwiMP3 Waterproof Player

Page 13: P.I.N.G. Issue 9.1


Telescopic contact lens
Optical vision made crystal clear

Ever imagined how fascinating it would be if we could zoom in and zoom out whenever desired, simply by blinking? The telescopic contact lens is one such invention. It has been designed by a team of Swiss and American researchers led by Joseph Ford, a professor of electrical engineering at UC San Diego. Its main aim is to improve vision for patients suffering from Age-related Macular Degeneration (AMD), who find it difficult to read or recognize faces.

This lens allows the user to see a zoomed-in version of whatever they have trouble seeing. The image is created by taking in light and magnifying it at the periphery: the contacts magnify the user's view 2.8 times at the periphery, while the center provides no magnification. The zoom function of the contact lenses is always active, so to switch it on and off the user has to wear liquid-crystal 3D TV glasses. The liquid crystals in the glasses electrically change the polarization of the light, selectively blocking it and directing it either to the center or to the periphery of the lens to create normal or magnified vision. The lens is approximately 1.17 mm thick, and its peripheral, telescopic part consists of aluminium mirrors fitted tightly together to create a ring-shaped telescope embedded in the contact lens.

The prototype has been tested on a life-sized optomechanical model of the eye, with images captured on computers through a complex imaging system. The conclusion is that the lens provides a very clear image: the regular image is clear, while the magnified image is a bit blurred. The magnified image also shows abnormal colours due to the shape of the lens, which makes it difficult for the brain to accept the image. But this is the best clarity the researchers have achieved so far.

The lenses are made of polymethyl methacrylate (PMMA), a gas-impermeable polymer that was used in the old, hard contact lenses. This rigidity means the lenses can be worn only for a short duration. Another drawback is the thickness: while a standard contact lens is just 80 microns thick, this lens is roughly 11 times thicker.

Researchers are working on changing the material of the lens which will avoid irritation of the eye by providing sufficient oxygen to the cornea and allowing the users to wear the lenses for a longer duration. They are also looking for a softer material which will make the lens more flexible. Due to its shape, which has been designed so as to fit the eye, grooves have been added to the surface to correct the aberration of colours.

The team is also trying to develop a version of the lens with grooves in its surface that polarize the incoming light. This would make it possible to use the lenses without the 3D glasses.

-Janhavi Kulkarni
Pune Institute of Computer Technology

Pune


Spintronics
An emerging approach enabling quantum possibilities in the world of nanotechnology

Spintronics, or magnetoelectronics, is an emerging technology that exploits both the intrinsic spin of the electron and its associated magnetic moment, in addition to its fundamental electronic charge, in solid-state devices.

What exactly is the spin of an electron? The answer lies in the meaning of the word 'spin': it is often pictured as the electron rotating about its own axis, though it is really an intrinsic quantum property, and the electron's magnetic moment is tied directly to it. These properties of the electron are widely used in quantum physics.

Temperature measurement may seem an easy task when it is just a matter of reading mercury levels, so why use nanoscale thermometers? Nanoscale thermometers can measure the temperature of human cells and of other nanoscale spaces, which will prove vital in the thermal management of electronics and in monitoring the structural integrity of high-performance materials. Along with this, it opens up a new arena in medical research.

How is the temperature measured? Imperfections known as nitrogen-vacancy centres, engineered in diamond, are used as nanoscale thermometers. Each vacancy traps an electron, and the trapped electron behaves like an isolated atom. A variation in temperature, even a nanoscale one, causes the lattice structure of the diamond to expand or contract. So where does spintronics come into the picture? The change in the diamond's lattice structure induces a change in the spin properties of the electrons trapped in the nitrogen-vacancy centres, and this change is measured using a laser-based technique, which allows temperatures to be monitored over a wide range. The diamond sensors are very small, about 100 nanometers, and each one contains multiple vacancies to trap electrons. These sensors can therefore be embedded together to form a nanoscale thermometer that can measure the temperature of an area smaller than a cell.

Where can we use these thermometers? This technique could help in understanding and analyzing heat dissipation in integrated circuits, which may eventually allow various components to be combined on a single chip without fear of the chip melting down. It may also become possible to monitor the nanoscale cracking and degradation caused by temperature gradients in materials and components that operate at very high temperatures. The technique can have applications in the chemical industry as well, since it may make it possible to study and analyze the thermal behavior of chemical reactions.

The spin of a single electron thus opens up a new arena of technologies in the quantum and atomic fields, which are ultimately used to create new technologies in other areas. Understanding the electron's motion and its spin properties plays a vital role in developing them. This certainly answers the question of how a particle as small as the electron, and its spin, can be put to work for the greater good of the world!

-Manasi Godse
Pune Institute of Computer Technology

Pune

World’s smallest battery is six times thinner than a bacterium: researchers at Rice University have developed the world’s smallest battery, just 150 nanometers in width, making it six times thinner than a bacterium, hundreds of times thinner than a human hair and about 60,000 times smaller than a typical AAA battery.

World’s smallest battery

Page 14: P.I.N.G. Issue 9.1


Lithium ion batteries
The battery technology that fought closely with nickel-cadmium to emerge as the most preferred portable power source. Ever mused WHY?

The testing of lithium metal as a potential portable power source started way back in 1912 under G.N. Lewis, the physical chemist famous for his representation of electron pairs. However, it wasn't until the 1960s that the technology became stable. The first commercial sales of non-rechargeable lithium batteries started in the 1970s. Attempts to develop rechargeable lithium-metal batteries were unsuccessful due to various stability issues, so research shifted to ionic compounds of lithium. The first commercial rechargeable Li-ion battery was made available by the Sony Corporation in 1991.

Li-ion batteries outperform the other battery technologies in several respects. They have twice the energy density of Nickel-Cadmium (Ni-Cd) batteries of comparable power. The peak voltage produced by a standard single-cell Li-ion battery is about 3.6 V, higher than that of a corresponding Ni-Cd cell, which produces about 1.2-1.5 V.

Another notable feature of Li-ion batteries is their slow discharge rate, which in turn means higher standby time for portable electronic devices; the rate is almost half that of Ni-Cd batteries. When you purchase an electronic gadget, its battery comes pre-charged to about 30% capacity, indicating that these cells retain a considerable amount of charge long after their first charging at the factory. Li-ion batteries can also be suitably modified to deliver a higher current discharge as required, i.e. a higher mAh (milliampere-hour) rating can be achieved. While the average life of regularly used Ni-Cd batteries is about 2-3 years, Li-ion batteries last longer, at about 3-4 years. Moreover, they require very low maintenance and can be shaped as per the requirement, although it is difficult to manufacture very thin batteries. A few limitations of Li-ion batteries are that their production costs are higher than those of other batteries by about 40%, they face transportation restrictions owing to safety regulations, and they require complex protection circuits because they incur permanent damage if discharged below a specific voltage. New lithium-ion compounds are being researched to improve the performance of Li-ion batteries, and with every new stable chemical composition the capacity of these batteries increases significantly. This technology has truly taken mobility forward a million leaps. The battery technologies now becoming increasingly commercial are Li-ion-polymer batteries and nickel-metal-hydride technologies; although these are older ideas, their extensive commercial application is now possible and they can potentially challenge Li-ion batteries. However, these batteries are on the back foot when compared on properties like current discharge and overall life. As technology improves, we can very well expect more advanced batteries that make life easier for us. Till then, Li-ion it is!
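To make the energy comparison concrete, stored energy in watt-hours is simply capacity in ampere-hours times nominal voltage. The cell figures below are typical ballpark values assumed for illustration, not measurements from this article:

# Back-of-the-envelope comparison of cell energy (illustrative figures only).
def energy_wh(capacity_mah, nominal_voltage):
    """Stored energy in watt-hours = (mAh / 1000) * V."""
    return capacity_mah / 1000.0 * nominal_voltage

li_ion = energy_wh(capacity_mah=2200, nominal_voltage=3.6)   # a common Li-ion cell
ni_cd = energy_wh(capacity_mah=1000, nominal_voltage=1.2)    # an AA-sized Ni-Cd cell
print(f"Li-ion cell: {li_ion:.1f} Wh")   # ~7.9 Wh
print(f"Ni-Cd cell:  {ni_cd:.1f} Wh")    # ~1.2 Wh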

-Nikhil Kulkarni
Pune Institute of Computer Technology

Pune

Exploring the Connected Car
Cross-vehicular communication for optimizing traffic mobility

The automobile industry worldwide is working on vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) technologies that would allow vehicles to connect to ad-hoc networks of the road and of other cars, using a secure 5.9 GHz Dedicated Short Range Communication (DSRC) Wi-Fi band. The aim of this 'connected car' is passenger safety, traffic de-congestion and overall efficiency of the transportation system. Car accidents are among the leading causes of death around the world, and surprisingly, most incidents have nothing to do with drugs or alcohol. Most of the time, the driver simply 'didn't see it coming'. But a connected car might.

We can usher in a paradigm shift in our driving experience by enabling our cars to talk to each other, to the road and even to the traffic signals. Cars can relay information such as who is braking hard, swerving into another lane, failing to maintain a safe distance or speeding into a common intersection. This can be achieved through 'telematics' and aftermarket devices, which are on-board diagnostics and communication technologies. Telematics is currently being developed by General Motors, Ford, Toyota, Volkswagen, Mercedes-Benz and Nissan.

It could lead to the prevention of more than 70% of the annual car crashes that don’t involve impaired drivers. Connections can also bring in emergency medical services, weather, streamlined traffic flow and real-time adaptive navigation. Drivers can anticipate a pile-up or even prevent it from happening and coordinate amongst each other in a better way. Thus this system of connected cars is not only reactive, but also proactive.
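As a purely illustrative sketch of the kind of status beacon a connected car might broadcast (the field names and the use of a UDP broadcast as a stand-in for a DSRC radio are assumptions for demonstration; real deployments use dedicated 5.9 GHz radios and standardized message formats):

# Hypothetical V2V status beacon, broadcast over UDP in place of a DSRC radio.
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class VehicleStatus:
    vehicle_id: str
    latitude: float
    longitude: float
    speed_kmph: float
    heading_deg: float
    hard_braking: bool

def broadcast_status(status, port=36000):
    """Serialize the status to JSON and broadcast it on the local network."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    payload = json.dumps({**asdict(status), "timestamp": time.time()}).encode()
    sock.sendto(payload, ("255.255.255.255", port))
    sock.close()

broadcast_status(VehicleStatus("MH12-AB-1234", 18.5204, 73.8567,
                               speed_kmph=62.0, heading_deg=90.0,
                               hard_braking=True))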

Summed up by Rahul Mangharam, an engineer at the University of Pennsylvania, “We can stop looking at a car as one system and look at it as a single node in a network.”

Going one step further, connected cars and their infrastructure could be used to reserve a parking space before reaching the destination, track down an intelligent cab, and enable autonomous driving, self-parking and a host of other possibilities. The world of automobile communication approaches realization!

-Neeraj Wagh
Pune Institute of Computer Technology

Pune

Twitter was originally used via SMS (Short Message Service), which has a 160-character limit. Twitter chose 140 to leave room for the user's address, which accounts for the other 20 characters. Hence, tweets are only 140 characters long.

Twitter

Page 15: P.I.N.G. Issue 9.1


living beings, the behaviour selection method mimicking their thinking mechanism is needed. For this purpose, the probability-based mechanism of thought was proposed based on probabilistic knowledge links between input (assumed fact) and target (behaviour) symbols for reasoning. However, real intelligent creatures including human beings select a behaviour based on the multi-criteria decision making process considering the degree of consideration (DoC) for input symbols, i.e. will and context symbols, saved in their memory.

Behaviour-based robotics, or behavioural robotics, is an approach in robotics that focuses on robots able to exhibit complex-appearing behaviours. Most behaviour-based systems are also reactive, which means they need no programmed internal representations of what a chair looks like or what kind of surface the robot is moving on; instead, all the information is gleaned from the robot's sensors. The robot uses that information to gradually correct its actions according to changes in its immediate environment, as in the control-loop sketch below.
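A minimal sketch of such a reactive control loop, with behaviours checked in priority order against raw sensor readings (this is a generic illustration, not any specific published architecture; the sensor names and thresholds are made up):

# Toy reactive, priority-based behaviour selection.
def avoid_obstacle(sensors):
    if sensors["front_distance_cm"] < 20:
        return "turn_left"
    return None

def follow_light(sensors):
    if sensors["light_level"] > 0.6:
        return "move_towards_light"
    return None

def wander(sensors):
    return "move_forward"                 # default behaviour, always applicable

BEHAVIOURS = [avoid_obstacle, follow_light, wander]   # highest priority first

def select_action(sensors):
    for behaviour in BEHAVIOURS:
        action = behaviour(sensors)
        if action is not None:
            return action

print(select_action({"front_distance_cm": 12, "light_level": 0.8}))  # turn_left
print(select_action({"front_distance_cm": 80, "light_level": 0.8}))  # move_towards_light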

Behaviour-based robots (BBR) usually show more biological-appearing actions than their computing-intensive counterparts, which are very deliberate in their actions. Genetic intelligence in robots is as yet an unexplored field, as there are few intelligent parent robots to build upon. Under cognitive intelligence, as we move towards increasingly complex technical systems that combine mobility with perception, reasoning and action generation, engineers will need a thorough interdisciplinary understanding of the basics of all these fields, along with a good understanding of their interrelation and application. Social intelligence for robots includes the ability of a robot to tell whether people are paying attention to it or not; this ability would help us introduce robots into the human world more effectively.

The ubiquitous robot, also known as the ubibot, is a third generation of robotics that can provide various services at any place and any time, in a ubiquitous space, through a network. Earlier ubiquitous-robotics architectures were unable to resolve conflicting behaviours or to operate in uncertain and dynamic environments. The technology is ripe for developing combat robots which function as part of a group. The social-insect paradigm is a good place to start, where the approach might not be to see how intelligent one can make a robot bug or critter, but how stupid it can be and still accomplish its mission. Robots can be extremely useful while falling short of human intelligence. And human intelligence in robots, especially combat robots, might not always be desirable even if it were achievable.

This shows us the other side of the coin faced by Artificial Intelligence. Do we really want to place the intelligence we achieved after years of evolution in a non-living being? Is the need so dire that automating with the aid of robots is not enough, and now we want them to think for us as well? Of course, some would argue that our technology will not progress unless we explore these new avenues, but we should use the technology already available for the betterment of human life rather than just barging ahead.

-Shruti Palaskar
Pune Institute of Computer Technology

Pune

The US Department of Defense has used 1,760 PlayStation 3s to build a supercomputer, as it was the cheapest option. It can perform about 500 trillion floating-point operations per second (500 TFLOPS). The project, called the Condor Cluster, cost about 2 million dollars, roughly 15-20 times less than a supercomputer of equivalent capacity.

Condor Cluster


Computational Intelligence
An improved approach used for designing next-gen robotic intelligence

Jarvis, alias ‘Just A Rather Very Intelligent System’, is Tony Stark’s right-hand man (figuratively). He aids Tony in all of his endeavours, most of which are sadly still dreams in many people’s lives. But Jarvis is not one of them anymore. Chad Barraford, who works for Apple, has made sure of that by building a Jarvis for himself. His Jarvis, or DIY Digital Life Assistant, listens and talks to him, monitors his house, keeps him up to date on current events such as breaking news via instant message, wakes him up in the mornings, tracks his financial transactions and the like. His Jarvis was coded using AppleScript and runs on a Mac Mini. It uses a radio-frequency identification (RFID) tag reader, a home automation system, wall speakers and a wireless microphone. To make Jarvis talk, Barraford developed a language-interpreter system that uses MacSpeech Dictate, a program which converts speech into text so that Jarvis can interpret it. Jarvis has an alarm-clock system and a daily weather report ready before Barraford wakes up. To watch over the house, Jarvis uses RFID tag readers to know when somebody is in the house, and it changes the environment within the house according to the occupant.

All this is possible thanks to the branch of Computational Intelligence (CI). Computational Intelligence is a collection of computer-based methodologies and algorithms used to solve complex real-world problems in ways that do not rely on first-principles models and do not fit neatly into statistical modelling, where those approaches would prove ineffective. It maps these complex problems onto similar problems that nature already solves, i.e. biologically inspired algorithms, and then builds similar solutions. An important aspect of CI is adaptivity. Neural networks, a sub-branch of CI, mimic the human brain and represent a computational mechanism based on a simplified mathematical model of neurons and the signals they process.
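As a minimal illustration of that 'simplified mathematical model' of a neuron (a generic textbook artificial neuron, not any particular system mentioned here; the weights are arbitrary illustrative values):

# A single artificial neuron: weighted sum of inputs through an activation function.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Two input signals with illustrative weights.
print(neuron(inputs=[0.5, 0.9], weights=[0.8, -0.4], bias=0.1))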

The ability to think has been the most fascinating feature in robots. Robotic intelligence is classified into six categories: cognitive intelligence, social intelligence, behavioural intelligence, ambient intelligence, collective intelligence and genetic intelligence. The most recent progress in this field of robotic intelligence is IBM's computer Watson; the most complex and sophisticated part of its programming is that it learns from its mistakes. Robot intelligence can be realized through Intelligence Technology (IT) using an Intelligence Operating Architecture (IOA) in which computational intelligence (CI) is embedded. The IT is based on the Mechanism of Thought (MoT) using CI embedded in the IOA. To make RTT deliberately interact with the surrounding environment like

Page 16: P.I.N.G. Issue 9.1


The BIOS called ‘Life’
The amalgamation of Bio and Tech

Bios: a word common to both computer science and the biological sciences. The word has different meanings in the two sciences, but still implies something not so different. 'Bios' means 'Life' in Greek. The existence of every living thing, as per the biological sciences, lies within a single cell: a single, minuscule, seemingly insignificant cell, made of nothing but protoplasm, coming together in millions to form what we call a living being. BIOS, the Basic Input Output System, on the other hand, is the software run by our computers on start-up. An essential component needed for loading the OS as well as for testing the hardware components on initialization, the BIOS might as well be called the existential necessity of a PC. So you see, 'bios' has been successfully carried through the ages into what life, for us, now is: technology.

Biology has overlapped with many sciences in the recent past, and continues to be applied widely for the betterment of mankind. The two biological streams that have been shaped recently, thanks to our never-ending obsession with technology, are biotechnology and, more recently, bioinformatics. Biotechnology harnesses various biomolecular processes, systems and living beings to develop new processes, technologies and products that can be used to enhance humankind. It has grown over many years to now include fields such as genomics, biomedical engineering, agricultural technology and many more. Various biotechnological processes, from the fermentation of beer and the making of bread to the manufacture of antibiotics such as penicillin following Alexander Fleming's discovery of the mould Penicillium, have all evolved over time.

The modern biotechnology era began in the 1970s with the first successful gene-splicing experiments, in which the DNA of various organisms is cut and then recombined to form new recombinant DNA with desired characteristics. This success led to a genetically modified bacterium that broke down crude oil and hence could be used to treat oil spills at sea. Selecting drugs for a patient according to his or her genetic make-up, gene therapy to repair defective genes, new vaccines developed from modified organisms, cloning: all of these are due to biotechnology. However, the most valuable initiative taken for the overall improvement of present and future generations was the HGP (Human Genome Project), which was responsible for sequencing the entire human genome.

Bioinformatics is a field very close to computer science. It is concerned with the development of new software tools that help convert the hoard of biological data into useful information. These software tools make it possible to store, retrieve, organize and update data easily. They combine mathematics and discrete structures, image processing, databases and query languages, as well as high-level languages such as C++, C#, Java and Python. Bioinformatics has made it possible to record gene sequences and protein patterns, among other things, which has helped in studying and designing DNA structures and in studying the evolution of biological species.
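As a small, generic illustration of the kind of task such tools automate (the DNA string and motif below are made up; real pipelines work with standard formats such as FASTA):

# Toy bioinformatics example: GC content and motif search on a DNA string.
def gc_content(seq):
    """Fraction of bases that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def find_motif(seq, motif):
    """Return all 0-based start positions where the motif occurs."""
    seq, motif = seq.upper(), motif.upper()
    return [i for i in range(len(seq) - len(motif) + 1) if seq[i:i + len(motif)] == motif]

dna = "ATGCGCGTATGCATGCGG"
print(f"GC content: {gc_content(dna):.0%}")       # 61%
print("ATGC found at:", find_motif(dna, "ATGC"))  # [0, 8, 12]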

Biotechnology and bioinformatics have made valuable contributions to our understanding of the evolution of all species, giving rise to an era of cutting-edge technologies in medicine, genetics, nanotechnology, and molecular and cellular biology, all with the help of curious, intuitive and determined engineers.

-Aboli Mahajan
Pune Institute of Computer Technology

Pune

Thanks to the International Space Station's high-speed Ku-band antenna, astronauts can now connect directly to the internet. The Crew Support LAN allows a station laptop to remotely control a ground-based computer that is physically wired to the internet, whenever this connection is available.

Ku-band antenna


Hacking with USB
Insights into the inconspicuous tools of plug-in hacking

Imagine if you could plug a pen drive into a friend's PC and transfer all of their protected data in no time. Doesn't that sound cool?

The Windows operating system stores most of the passwords that are used on a daily basis. These passwords may belong to one of the following categories:
1. Windows OS passwords
2. Passwords stored by browsers
3. Email account passwords
4. Network passwords
5. Disk passwords, etc.

Apart from this, Windows also stores sensitive information like IP addresses, password hashes and many more.

If an attacker has physical access to your Windows machine, it is possible for them to instantly steal sensitive information like passwords, confidential documents, IP information and network information!

USB Switchblade and USB Hacksaw are such USB-based hacking tools developed by hak5.org, which can be used to accomplish this.

Hack_Tool_1# USB Switchblade
This tool exploits the special autorun loader on the virtual CD-ROM partition of a U3-compatible USB key, and, failing that, tricks the user into running the payload by choosing "Open folder to display files" upon insertion.

This tool requires Administrative privileges for systems running Windows XP, 2000 or 7.

The speciality of this tool is that it can run silently, without modifying the system or sending network traffic, making it incredibly stealthy.

You may be wondering what USB Switchblade actually does. Among other things, it:
1. Gathers IP information
2. Dumps messenger passwords
3. Dumps the network passwords
4. Dumps the SAM (system passwords)
5. Dumps the LSA secrets (more system passwords)
6. Dumps the system's product keys
7. Dumps the URL history

Hack_Tool_2# USB Hacksaw
The USB Hacksaw is an advancement of the USB Switchblade that retrieves documents from USB drives plugged into the target machine and securely transmits them to an email account of the hacker, without the victim even realizing that the machine is generating any network traffic.

How can we really defend ourselves from these tools? Anything can be hacked. However, a few small steps can make your PC a little more secure in no time:
1. Disable the AutoRun feature in Windows. This ensures that an infected pen drive is not read automatically (a minimal registry sketch follows below).
2. Disable the USB ports.
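For instance, AutoRun for all drive types can be switched off through the registry. A minimal sketch, assuming Python on Windows run with administrative rights (the 0xFF value disables AutoRun on every drive type; back up the registry before changing it):

# Disable Windows AutoRun for all drive types via the registry (run as admin).
import winreg

KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

key = winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH)
# 0xFF disables AutoRun on all drive types (removable, CD-ROM, network, ...).
winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)
winreg.CloseKey(key)
print("AutoRun disabled for all drive types.")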

Though other such tools are available, these two dominate the domain of USB cracking. I hope you all think at least twice the next time you let your friends attach their pen drives to your laptop.

-Kshitij Khakurdikar
K. K. Wagh Institute of Engineering and Research,

Nashik

Page 17: P.I.N.G. Issue 9.1


The world is getting smaller and flatter. A staggering 50 billion connected devices, more than 7 for every connected person, are forecast for the end of this decade. Though this connectivity has led to faster communication, faster transportation and faster collaboration, it has also created a drastic need for the social fabric to evolve. Everyone is expected to be online, to be connected round the clock. Hence, it is the need of the hour that anything IP-addressable should be able to seamlessly share content and collaborate with everything else. And thus the notion of hyper-connectivity is born.

To comprehend this concept, consider this example: expert doctors can examine patients from any corner of the world over such a hyper-connection. Advanced person-to-machine communication can help compatible machinery capture the required information about the patients. This data can then be transmitted over networks to a server and on to the doctors, regardless of their physical location, thus eliminating geographical barriers.

In such an inter-webbed world, every transaction, every vehicle, every sensor reading and every click generates a tremendous amount of data. This data is characterized by the 3 V's: Velocity (the high speed at which it is generated), Variety (its diverse nature) and Volume (its sheer quantity). This 'Big Data' needs to be transmitted efficiently and rapidly between devices, and so Cloud Computing comes into the picture. It is the perfect tool to manage, store and administer large amounts of data. Cloud Computing enables the sharing of data and resources, and thus requires well-suited networks of large groups of servers.

As we move forward in the era of globalization, the expectations from our Internet Service Providers have changed significantly. Voice telephony has expanded to Triple Play services (voice, internet and TV), and such a drastic change is achievable only by Next Generation Networks (NGN). This encapsulated, packet-based service relies heavily on IP as its network layer, with the IP Multimedia Subsystem (IMS) at its core: an access-independent platform for a variety of access technologies like GSM, 3G, Wi-Fi, cable and xDSL. This telecommunication architecture provides greater controllability, programmability and scalability than any of its predecessors, giving improved relay times in the transmission of Big Data.

Many questions are asked about the transmission of this tremendous data, as well as about the security of the cloud itself. Special efforts to prevent information leaks and to ensure data security are hence a must. Though complete security can never be achieved, the risks involved can always be reduced. As Ellen Richey rightly states, "A world that seemed transparent and promising becomes murky and threatening if we lose trust in the way data is used".

The Hyper-Connected World is no longer just a concept; we are already surrounded by it. It transcends all physical limitations, establishes faster data mobility and connects one and all. Irrespective of its limitations, one thing is certain: the Hyper-Connected World is here to stay.

-The Editorial Team

Pathway to Global Cyber Resilience

the editorial feature
speaking what we spectate

The Hyper-connected World


Distributed Version Control System
Solving consistency issues in globally spread projects

Although working solo can be simpler and uncomplicated, eventually it gets very tedious. Since a project has many files, and the developers might be located in different cities or even different continents, working together physically is practically impossible. What if you still want to work together on the same project, simultaneously? Enter version control.

A Distributed Version Control System will do everything it can to ease your work. Consider a scenario in which an artist, a sound programmer and a developer want to work on a project named ProjectX at the same time. The project already has a big codebase split across multiple files. The concerned directories in the source tree are: $src/audio (audio developer), $src/images (artist) and $src/debug (developer).

The current version of the project will be on some server. These project files on the server will act as the Central Repository. Each person who wants to work on this project will simply clone this central repository and end up with a local repository on his/her machine. Now, using the source code in their respective local repositories, they can make as many changes as they want. They can keep committing their changes onto the local repository to ensure they don’t lose track of them. Committing a change simply means adding it to the change-log of the local repository.

But if developers are still working independently, why go through the hassle of setting up a VCS instead of just downloading a source tarball and hacking away? Well, this is the beauty of version control: you can merge your changes with the changes made by other people working along with you. This is where the central repository comes in. Anyone working on the project with write access to the central repository can 'push' his or her changes to it. Once the central repository is updated, everyone else working on the same project can 'pull' these changes into their local repositories and continue working with the updated source code.

The changes might be reviewed by the original developers, so that a bad set of changes pushed by one developer doesn't break the source code for all the other developers pulling changes. Each commit is given a unique identification key so that the developer responsible for it can be identified and possibly confusing situations can be avoided.

A few examples of contemporary version control systems are Git, Subversion (SVN) and Mercurial (Hg). As a matter of fact, the majority of open-source software is the result of many developers working together from around the globe. The tool which enabled them to accomplish this? Version control. A typical clone-commit-push-pull cycle with Git is sketched below.
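As a rough sketch of that clone-commit-push-pull cycle, expressed with Git driven from Python (the repository URL, branch and file names are placeholders for illustration; Git itself must be installed):

# A typical distributed-workflow cycle using Git.
import subprocess

def git(*args, cwd=None):
    """Run a git command and raise if it fails."""
    subprocess.run(["git", *args], cwd=cwd, check=True)

# 1. Clone the central repository into a local repository.
git("clone", "https://example.com/projectx.git", "projectx")

# 2. Make a change and commit it to the local repository's change-log.
with open("projectx/src/debug/notes.txt", "a") as f:
    f.write("fixed a crash in the debug overlay\n")
git("add", "src/debug/notes.txt", cwd="projectx")
git("commit", "-m", "Fix crash in debug overlay", cwd="projectx")

# 3. Share the change with everyone via the central repository...
git("push", "origin", "master", cwd="projectx")

# 4. ...and pick up changes pushed by the artist and the audio developer.
git("pull", "origin", "master", cwd="projectx")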

-Ashish Gupta
Pune Institute of Computer Technology

Pune

Page 18: P.I.N.G. Issue 9.1


I want technology to be the core competency. And as far as the recession and doing acquisitions are concerned, for an eight-billion-dollar company it really doesn't matter.

Q: Considering that the private sector has better infrastructure, technology and funding than the government, should waste water management be privatized?

A: This is a very debatable issue. Practically speaking, the government is required. In India, the land is owned publicly. The sewage water generated from our homes is much easier to treat than industrial effluents, which are the actual challenge. A private company can't own land in India. There have to be joint efforts for private companies to bring in technologies and money. The laws, the land and other public assets will have to be regulated by the government. Hence a PPP (Public-Private Partnership) is required in India for leasing land and sharing the profit.

This collaboration is increasing. In Nagpur, handing the water distribution over to a private company is being discussed. Things take a long time to materialize in India. There are no barriers in terms of technologies; it is just the political and bureaucratic processes that slow everything down.

Q: Water is about to enter a period of crisis. At an industrial level, how should companies proceed with waste water management?

A: First and foremost, India uses water in three areas: industry, agriculture and domestic use. About 90% of the water is used for agricultural purposes. This is quite different from developed countries, where industrial usage is significantly higher. We surpass every country in water used per ton of food produced. Israel used to suffer from a perennial shortage of water; today, it is completely free of water issues because it has adopted advanced systems of energy and water conservation.

In India, agriculture is a politically linked issue. As far as India's industry is concerned, some industries are changing. If you take ITC, one of the leading companies, it uses minimal water and carbon and measures its intake of water. On the other hand, family-owned companies are the least conscious about water usage and wastage. Because of this, India is perhaps the world's most polluted country.

CSR (Corporate Social Responsibility) is basically undertaken for corporations to contribute to society, but in this case it won't necessarily help. CSR has been made compulsory in the recently passed Companies Bill, but unless one is conscious about it and sees to it that waste water is not discharged into the river, there is not much hope.

Q: What is the scope for an IT engineer in Wipro Water?

A: When we go into the next phase of analytics, the requirement for IT engineers will increase. We currently need IT engineers to understand the logic of the machinery and then write and maintain programs. The internet will also soon resolve the boundary issues of PLC and DCS systems. Going forward, companies will want all of their discrete systems integrated into one; for this integration we need IT people. We need IT solutions, along with hardware and software, for the integration of wide-ranging applications, which none of the IT companies have. We need something to integrate the IT arm, the design arm and the business arm all into one. Ten years back, water systems were not as automated as they are today. Nor was much care taken over the constituents of the water, but the need to do so has now arisen.

Q: Is India ready to move on to completely non-conventional sources of energy?

Nvidia’s most powerful graphics card ever: the Quadro K6000 features 12 GB of GDDR5 graphics RAM to help it handle incredibly complex scenes, 2,880 streaming multiprocessor (SMX) cores and support for four simultaneous displays at up to 4K resolution, giving the user hyper-powerful overall performance.

Quadro K6000


Interview
Bridging the rift

Mr Pinaki Bhadury is the Head of Business at Wipro Water. He pursued engineering at BITS Mesra and business management at the University of Pune. An expert on energy and water, Mr Bhadury shares his personal experiences and views on energy and water management.

Q: After completing your graduation in mechanical engineering, you pursued post-graduation in business management with a specialization in marketing. What made you change your stream to management?

A: While I started my career in engineering, I felt the need to study management science because it helps in understanding how a business is run and the various issues involved in it, like business economics, world economics, data analysis and decision making, and how to use these inputs in working out strategies. I still use many of the tools of engineering that I learnt, in marketing. As you move up and get more involved in strategy, you have to learn how to operate a business by looking at business issues like administration, HR, finance, etc. As you get involved in these things, other aspects of business, like leadership and credibility, are required along with technological knowledge, so that we can show the way forward. Management is about how to manage or use the various resources at one's disposal. You have to take decisions and give directions in management. It depends on when to take your own decision or judgement: that is management. But even today, I would call myself more of an engineer than a marketer.

Q: In the past 5 years, you changed your job thrice: from Thermax to Doshion Veolia to Frost & Sullivan to Wipro. Why was that so?

A: I get bored very easily. When I joined Thermax back in 1983, it was a small company, and hence there were new challenges, new issues and new opportunities almost every day, so I never felt the need to change. When I overcame these challenges one by one, I felt good, and that kept me going for 26 years. By then, I had dabbled at all levels in energy, be it knowledge, the development of different technologies, policy making or anything else. Since I didn't know much about water, it got my attention and attracted me, which made me join Doshion Veolia. After that I joined Frost & Sullivan, which, being so different from any industry I had known, felt thrilling. Going forward, I realized that water was going to be an even bigger issue than energy. In Wipro Water, I saw the opportunity to implement my ideas and strategies related to water, so here I am.

Q: Wipro acquired Aquatech in 2008, during a period of financial crisis. What technologies did Wipro start implementing after acquiring it?

A: Aquatech was a company which provided ultra-pure water for pharmaceutical companies. The quality of water they produced was very high and the systems were very small, whereas what we are doing today for power plants is on a larger scale. We therefore required different technologies and different methods to be implemented. We started by implementing ZLD, that is, Zero Liquid Discharge. With this, we could treat and reuse 85% of the waste water for our own use, which meant we needed only 15-20% raw water. This led to a reduction in the usage of fresh water, as we slowly shifted to waste water. We plan to add many more technologies, because

WIPRO Water’s Business Head talks about Going Green

Page 19: P.I.N.G. Issue 9.1


Across
1. This crazy idea will provide quality broadband service to billions living without it.
6. A motion-sensing input device developed by Microsoft for the Xbox.
9. A USB-based hacking tool named after a ballistic push-knife.
12. A key to certify the authenticity of a copy of a program.
19. The first bug to battle it was Staphylococcus aureus.
20. A unit of quantum information.
21. A US-based e-commerce company owning web-based software that provides online and mobile payment services.
22. Refers to the broad industry related to using computers in concert with telecommunications systems.
23. Denotes 'Life' to an anthropologist, a firmware interface to a computer engineer.
24. IEEE members get 24 hours to solve a set of challenging programming questions.

Down
2. The world's favourite AI computer, introduced by Marvel comics. Fluent in sarcasm and possesses a British accent.
3. A set of instructions indicating what actions should be taken by the system when a device is mounted.
4. The process of generating an object from powder through atomic diffusion.
5. The top layer of the cerebral hemisphere.
7. A layer in the earth's atmosphere which was the lair of the world's fastest spy plane.
8. A type of point defect in a crystal.
10. Earlier meant "zero" in Arabic; used as an algorithm for encryption and decryption.
11. The company that commercialized the first rechargeable Li-ion batteries in the global market.
13. An intrinsic form of angular momentum carried by elementary particles.
14. A physical chemist famous for his representation of electron pairs.
15. A membrane-bound compartment which exists in chloroplasts and cyanobacteria.
16. The cryptocurrency of the internet.
17. A 3rd-generation robot that would provide service to a user at any place, any time and through any network.
18. A region of air controlled by a particular country. Also the name of the app store for Leap Motion apps.

Answers to this crossword lie within the included articles

crossword


A: No, not only India but the entire world. About 67% of the world's electricity is produced from coal. The energy obtained from a small amount of coal is very high, and obtaining the same amount of energy from a renewable source is very difficult and costly. So it's a question of whether it is practically possible or not.

Coming to India: when India started generating electricity, oil was used for power generation. After oil became expensive in 1972, India shifted to coal. Today, 73% of Indian electricity comes from coal. To replace all of that with renewable energy is impossible. We do not have that much land for windmills, biomass grasslands or solar panels, as Indian land is primarily used for food. This is the major challenge. Even many international bodies have now concluded that we will have to remain dependent on conventional sources.

Q: What about solar energy or nuclear energy? Apart from their high investment, they are unlimited sources of energy.

A: Solar is unlimited, but you need to calculate how many hours of solar energy you get in a day. Additionally, storing solar energy is a problem; it may become viable in the future after advances in battery technologies. But solar energy has its own limitations. The atmosphere cuts off 84% of the solar energy, so only 16% is actually available for solar-based energy generation. A second limitation is that the Sun's radiation is not uniform across our country, so the energy-generation potential is not uniform either. As a matter of fact, the coastal locations in India receive the lowest solar radiation and hence have the lowest solar energy generation potential. One idea coming up is to beam energy generated by satellites down to earth through a wireless medium; the solar panels on satellites are much smaller. So, for example, we could have the International Space Station beam down its energy through a wireless medium. As for nuclear energy, many people are passionate about it, but we cannot ignore its dangers. The destruction caused at Hiroshima and Nagasaki was horrible, and everyone is contemplating its threats now. France is one example: it used to generate 75% of its energy from nuclear sources, but because of this danger it has started decommissioning its nuclear reactors. The world, currently, does not have the infrastructure to support such a delicate source of energy.

Q: There is a gap between industry and academia. How can this gap be reduced?

A: In industry, you need to have a practical approach. First of all, students should maintain an active relationship with the industry from the beginning. They should do internships and interact with industry professionals throughout their curriculum. The course should compulsorily include 6 months of classroom education and 6 months of internships, so that students can develop a practical approach from their first year itself and not just in their final year. Also, students can carry out projects in collaboration with the industry and gain insights into its internal workings. It gives students the perfect opportunity to apply the concepts they have learnt theoretically. Secondly, the syllabus should be set in partnership with the industry and in accordance with industry requisites. Its coverage should include present-day technologies as well as their implementations in all aspects of the various fields.

Q: Any final advice for students pursuing their graduate courses?

A: You are far more aware and clear about what you want than we were. As long as you keep pursuing your dream, there is no turning back. The world is there for you.

We thank Mr Pinaki Bhadury for his prodigious insights and valuable contributions to P.I.N.G.

“As long as you keep pursuing your dream, there is no turning back.”

-The Editorial Team

Page 20: P.I.N.G. Issue 9.1

OFFICE BEARERS OF P.I.S.B. 2013-2014

Branch Counsellor: Dr Rajesh Ingle

Chairperson: Hrishikesh Saraf
Vice Chairperson: Ajinkya Rajput

Treasurer: Kartik Nagre
Vice Treasurer: Ritesh Porey

Secretary: Harshita Gangrade, Ira Ramtirth
Joint-Secretary: Ashay Kamble, Sidhesh Badrinarayan

Secretary of Finance: Chirag Kadam
Joint Secretary of Finance: Bhushan Charkha

VNL Head: Sphoorti Joglekar, Yogesh Thosare
VNL Team: Arpit Khandelwal, Malhar Kulkarni

PRO Head: Apurwa Jadhav, Shruti Kuber
PRO Team: Arundhati Sawant, Yashaswini Kadam

Senior Design Team: Riddhi Kedia, Tanvi Oka

Creative Head: Manali Desai

P.I.N.G. Head: Chaitrali Joshi, Jai Chhatre, Rahul Baijal
P.I.N.G. Team: Saniya Kaluskar, Shivani Naik, Utkarsh Murarka
P.I.N.G. Designer: TanveerSingh Mahendra

Web Head: Lakshami Sharma, Madhuri Gole, Saurabh Kulkarni
Web Team: Akshay Arlikatti, Soham More
Web Designer: Harsh Baheti

WIE Head: Harshita Gangrade
WIE Secretary: Chandrika Parimoo, Rashmi Varma

Senior Council: Abhijeet Mahajan Aditya Gurjar Amogh Palnitkar Archit Doshi Deven Bawale Eshan Joshi Ifa Chiya Luv Varma Priya Avhad Rutugandha Bapat Sahil Uppal Tarun Gupta Vishakha Damle Vrushali Desale Yashashri Bhandari

Junior Council: Abhinav Kaul Aditi Baraskar Aman Nigam Aman Singh Anirudh Sudarshan Ankita Chiraniya Arijit Pande Ashish Gupta Avani Joshi Hrishikesh Pallod Ishita Mogra Janhavi Kulkarni Jeevjyot Singh Chhabda Kriti Kesarwani Kruti Chauhan Nishtha Kalra Parinita Matharu Prerit Auti Rahul Dhavlikar Rutuja Shah Rutvij Dhotey Saurabh Abhale Sharika Khurana Srishti Ganjoo Sudipto Chaterjee Vallari Anand Vikram Patil

Page 21: P.I.N.G. Issue 9.1

The PICT IEEE Student Branch (PISB) proudly celebrates the decade-long success of our technical event, CREDENZ. Since its inception in 2004, CREDENZ has attracted, enraptured and enlightened its assemblage of students, teachers and experts with its technical prowess through its events and competitions amidst a three-day fiesta.

CREDENZ '13 also marks the momentous occasion of the 25th anniversary of our Student Branch. The PISB team carries forward a proud and honorable lineage as we pay tribute to all those before us, who have been the pillars that helped us reach the level at which we gloriously stand today.