Category Archives: SOFTWARES-ROM

► Android (operating system) software‎ (6 C, 252 P)
► BlackBerry software‎ (3 C, 73 P)
► IOS software‎ (7 C, 276 P)
► Java device platform‎ (34 P)
► MeeGo software‎ (5 P)
► Nokia mobile software‎ (1 C, 3 P)
► Symbian software‎ (1 C, 43 P)
► Windows Mobile software‎ (4 C, 10 P)
► Cross-platform mobile software‎ (10 P)
► Free mobile software‎ (4 C, 28 P)
A
► Apps‎ (64 P)
B
► Bada software‎ (1 C, 6 P)
► Mobile business software‎ (7 P)
D
► Mobile software development‎ (2 C, 12 P)
► Mobile software distribution platforms‎ (17 P)
G
► Mobile games‎ (14 C, 547 P)
I
► Mobile instant messaging clients‎ (1 C, 2 P)
M
► Mobile device management software‎ (17 P)
O
► Mobile operating systems‎ (9 C, 30 P)
P
► Mobile software programming tools‎ (2 C, 17 P)
R
► Mobile route-planning software‎ (8 P)
S
► Mobile social software‎ (28 P)
W
► Mobile web browsers‎ (33 P)
► Windows Phone software‎ (2 C, 34 P)
Pages in category “Mobile software”

The following 142 pages are in this category, out of 142 total. This list may not reflect recent changes.
Mobile app
2
2ergo
A
ActivEcho
Adaptxt
Appcelerator Titanium
Appsbar
Appy Pie
Wikipedia:Articles for creation/Application Craft
B
Banjo (mobile application)
Blink (layout engine)
Binary Runtime Environment for Wireless
Mobile browser
C
CamperMate
Canditv
Capricode
Chatterfly
City ID
Comparison of Exchange ActiveSync clients
Corona (software development kit)
D
July Systems
Mobile application development
Device tracking software
Mobile dialer
Dict.cc
List of mobile software distribution platforms
Doorbot
DRONA Mobile
E
EasilyDo
EveryWAN Mobility Manager
Exchange ActiveSync
Exit Games
F
FOTA (technology)
G
Mobile game
GameChanger
GetTaxi
Good Technology
Google Now
Google Play
GroupLogic
H
Handheld video game
Hands-On Mobile
I
Imoblife
InnoPath Software
Intersog
ISiloX
ITyphoon
K
Kavapoint
Kinoma
L
Lango Messaging
List of Google products
List of GPS software for mobile phones
Lovegety
M
Mobile application management
Marmalade (software)
MiKandi
MOAP
Mobi-Mechanic
Mobi-Medic
Mobiflock
Mobile BASIC
Mobile Cloud Storage
Mobile software content rating system
Mobile Sorcery
Mobile Speak
Mobile virtualization
Mobile Web Server
MobilEcho
Mobinex
Mobiola
Mophun
Motoblur
Mpowerplayer
Mutual Mobile
MyMobileWeb
N
N-Gage (service)
Navicore
Nellymoser
Nielsen RingScan
Nokia Point & Find
Nokia Suite
O
On-Device Portal
Openmoko
ORCA (computer system)
P
Personal safety app
Pet Check Technology
PicDial
Mobile software platform
PlayPhone
PlugPlayer
Polaris Office
PrivacyStar
Project Houdini
Project Narwhal
Q
Qt Extended
Qt Extended Improved
Quickoffice
R
Radio Service Software
Runtastic
S
Secure Mobile Architecture
SeeClickFix
Sense Networks
Series 30 (software platform)
Series 40
Series 80 (software platform)
SHAPE Services
SimSimi
Sleipnir (web browser)
Snaptu
Sonic Boom, Inc.
Sony Ericsson Java Platform
SoulPad
Sports Tracker
SyncShield
System Center Mobile Device Manager
T
Tap for Tap
Tawkon
Todoist
ToneThis
TouchPal
Trazzler
Tristit Browser
TrueCaller
U
UAProf
United States Department of State panic button software
Universal Mobile Interface
UZard Web
V
Virtual Radio
W
Mobile wallpaper
WeatherBug (application)
WebKit
Whisper Systems
Wholesale Applications Community
WiDEN
WikiPock
WIPI
WMLScript
X
XHTML Mobile Profile
XT9
Y
Yahoo! Go
Z
Zitrr camera
Zlango
Zozoc

Google Throws Open Doors to Its Data Center



If you’re looking for the beating heart of the digital age — a physical location where the scope, grandeur, and geekiness of the kingdom of bits become manifest—you could do a lot worse than Lenoir, North Carolina. This rural city of 18,000 was once rife with furniture factories. Now it’s the home of a Google data center.

Engineering prowess famously catapulted the 14-year-old search giant into its place as one of the world’s most successful, influential, and frighteningly powerful companies. Its constantly refined search algorithm changed the way we all access and even think about information. Its equally complex ad-auction platform is a perpetual money-minting machine. But other, less well-known engineering and strategic breakthroughs are arguably just as crucial to Google’s success: its ability to build, organize, and operate a huge network of servers and fiber-optic cables with an efficiency and speed that rocks physics on its heels. Google has spread its infrastructure across a global archipelago of massive buildings—a dozen or so information palaces in locales as diverse as Council Bluffs, Iowa; St. Ghislain, Belgium; and soon Hong Kong and Singapore—where an unspecified but huge number of machines process and deliver the continuing chronicle of human experience.

This is what makes Google Google: its physical network, its thousands of fiber miles, and those many thousands of servers that, in aggregate, add up to the mother of all clouds. This multibillion-dollar infrastructure allows the company to index 20 billion web pages a day. To handle more than 3 billion daily search queries. To conduct millions of ad auctions in real time. To offer free email storage to 425 million Gmail users. To zip millions of YouTube videos to users every day. To deliver search results before the user has finished typing the query. In the near future, when Google releases the wearable computing platform called Glass, this infrastructure will power its visual search results.

The problem for would-be bards attempting to sing of these data centers has been that, because Google sees its network as the ultimate competitive advantage, only critical employees have been permitted even a peek inside, a prohibition that has most certainly included bards. Until now.


Here I am, in a huge white building in Lenoir, standing near a reinforced door with a party of Googlers, ready to become that rarest of species: an outsider who has been inside one of the company’s data centers and seen the legendary server floor, referred to simply as “the floor.” My visit is the latest evidence that Google is relaxing its black-box policy. My hosts include Joe Kava, who’s in charge of building and maintaining Google’s data centers, and his colleague Vitaly Gudanets, who populates the facilities with computers and makes sure they run smoothly.

A sign outside the floor dictates that no one can enter without hearing protection, either salmon-colored earplugs that dispensers spit out like trail mix or panda-bear earmuffs like the ones worn by airline ground crews. (The noise is a high-pitched thrum from fans that control airflow.) We grab the plugs. Kava holds his hand up to a security scanner and opens the heavy door. Then we slip into a thunderdome of data …

Urs Hölzle had never stepped into a data center before he was hired by Sergey Brin and Larry Page. A hirsute, soft-spoken Swiss, Hölzle was on leave as a computer science professor at UC Santa Barbara in February 1999 when his new employers took him to the Exodus server facility in Santa Clara. Exodus was a colocation site, or colo, where multiple companies rent floor space. Google’s “cage” sat next to servers from eBay and other blue-chip Internet companies. But the search company’s array was the most densely packed and chaotic. Brin and Page were looking to upgrade the system, which often took a full 3.5 seconds to deliver search results and tended to crash on Mondays. They brought Hölzle on to help drive the effort.

It wouldn’t be easy. Exodus was “a huge mess,” Hölzle later recalled. And the cramped hodgepodge would soon be strained even more. Google was not only processing millions of queries every week but also stepping up the frequency with which it indexed the web, gathering every bit of online information and putting it into a searchable format. AdWords—the service that invited advertisers to bid for placement alongside search results relevant to their wares—involved computation-heavy processes that were just as demanding as search. Page had also become obsessed with speed, with delivering search results so quickly that it gave the illusion of mind reading, a trick that required even more servers and connections. And the faster Google delivered results, the more popular it became, creating an even greater burden. Meanwhile, the company was adding other applications, including a mail service that would require instant access to many petabytes of storage. Worse yet, the tech downturn that left many data centers underpopulated in the late ’90s was ending, and Google’s future leasing deals would become much more costly.


For Google to succeed, it would have to build and operate its own data centers—and figure out how to do it more cheaply and efficiently than anyone had before. The mission was codenamed Willpower. Its first built-from-scratch data center was in The Dalles, a city in Oregon near the Columbia River.

Hölzle and his team designed the $600 million facility in light of a radical insight: Server rooms did not have to be kept so cold. The machines throw off prodigious amounts of heat. Traditionally, data centers cool them off with giant computer room air conditioners, or CRACs, typically jammed under raised floors and cranked up to arctic levels. That requires massive amounts of energy; data centers consume up to 1.5 percent of all the electricity in the world.


Google realized that the so-called cold aisle in front of the machines could be kept at a relatively balmy 80 degrees or so—workers could wear shorts and T-shirts instead of the standard sweaters. And the “hot aisle,” a tightly enclosed space where the heat pours from the rear of the servers, could be allowed to hit around 120 degrees. That heat could be absorbed by coils filled with water, which would then be pumped out of the building and cooled before being circulated back inside. Add that to the long list of Google’s accomplishments: The company broke its CRAC habit.

Google also figured out money-saving ways to cool that water. Many data centers relied on energy-gobbling chillers, but Google’s big data centers usually employ giant towers where the hot water trickles down through the equivalent of vast radiators, some of it evaporating and the remainder attaining room temperature or lower by the time it reaches the bottom. In its Belgium facility, Google uses recycled industrial canal water for the cooling; in Finland it uses seawater.

The company’s analysis of electrical flow unearthed another source of waste: the bulky uninterrupted-power-supply systems that protected servers from power disruptions in most data centers. Not only did they leak electricity, they also required their own cooling systems. But because Google designed the racks on which it placed its machines, it could make space for backup batteries next to each server, doing away with the big UPS units altogether. According to Joe Kava, that scheme reduced electricity loss by about 15 percent.

All of these innovations helped Google achieve unprecedented energy savings. The standard measurement of data center efficiency is called power usage effectiveness, or PUE. A perfect number is 1.0, meaning all the power drawn by the facility is put to use. Experts considered 2.0—indicating half the power is wasted—to be a reasonable number for a data center. Google was getting an unprecedented 1.2.
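
PUE is a simple ratio, so the figures above are easy to sanity-check. The following is a minimal illustrative sketch in plain Python, not anything from Google, showing how a 1.2 versus a 2.0 PUE translates into overhead power; the megawatt numbers are hypothetical.

# Illustrative only -- not Google code. PUE is total facility power divided
# by the power actually delivered to the IT equipment; 1.0 means zero overhead.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def overhead_kw(total_facility_kw: float, it_equipment_kw: float) -> float:
    # Power spent on cooling, power conversion, lighting, etc.
    return total_facility_kw - it_equipment_kw

# Hypothetical facility running 10 MW of servers:
print(pue(12_000, 10_000), overhead_kw(12_000, 10_000))  # 1.2 -> 2,000 kW of overhead
print(pue(20_000, 10_000), overhead_kw(20_000, 10_000))  # 2.0 -> 10,000 kW of overhead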

For years Google didn’t share what it was up to. “Our core advantage really was a massive computer network, more massive than probably anyone else’s in the world,” says Jim Reese, who helped set up the company’s servers. “We realized that it might not be in our best interest to let our competitors know.”

But stealth had its drawbacks. Google was on record as being an exemplar of green practices. In 2007 the company committed formally to carbon neutrality, meaning that every molecule of carbon produced by its activities—from operating its cooling units to running its diesel generators—had to be canceled by offsets. Maintaining secrecy about energy savings undercut that ideal: If competitors knew how much energy Google was saving, they’d try to match those results, and that could make a real environmental impact. Also, the stonewalling, particularly regarding The Dalles facility, was becoming almost comical. Google’s ownership had become a matter of public record, but the company still refused to acknowledge it.

In 2009, at an event dubbed the Efficient Data Center Summit, Google announced its latest PUE results and hinted at some of its techniques. It marked a turning point for the industry, and now companies like Facebook and Yahoo report similar PUEs.

Make no mistake, though: The green that motivates Google involves presidential portraiture. “Of course we love to save energy,” Hölzle says. “But take something like Gmail. We would lose a fair amount of money on Gmail if we did our data centers and servers the conventional way. Because of our efficiency, we can make the cost small enough that we can give it away for free.”

Google’s breakthroughs extend well beyond energy. Indeed, while Google is still thought of as an Internet company, it has also grown into one of the world’s largest hardware manufacturers, thanks to the fact that it builds much of its own equipment. In 1999, Hölzle bought parts for 2,000 stripped-down “breadboards” from “three guys who had an electronics shop.” By going homebrew and eliminating unneeded components, Google built a batch of servers for about $1,500 apiece, instead of the then-standard $5,000. Hölzle, Page, and a third engineer designed the rigs themselves. “It wasn’t really ‘designed,’” Hölzle says, gesturing with air quotes.

More than a dozen generations of Google servers later, the company now takes a much more sophisticated approach. Google knows exactly what it needs inside its rigorously controlled data centers—speed, power, and good connections—and saves money by not buying unnecessary extras. (No graphics cards, for instance, since these machines never power a screen. And no enclosures, because the motherboards go straight into the racks.) The same principle applies to its networking equipment, some of which Google began building a few years ago.

So far, though, there’s one area where Google hasn’t ventured: designing its own chips. But the company’s VP of platforms, Bart Sano, implies that even that could change. “I’d never say never,” he says. “In fact, I get that question every year. From Larry.”

Even if you reimagine the data center, the advantage won’t mean much if you can’t get all those bits out to customers speedily and reliably. And so Google has launched an attempt to wrap the world in fiber. In the early 2000s, taking advantage of the failure of some telecom operations, it began buying up abandoned fiber-optic networks, paying pennies on the dollar. Now, through acquisition, swaps, and actually laying down thousands of strands, the company has built a mighty empire of glass.

But when you’ve got a property like YouTube, you’ve got to do even more. It would be slow and burdensome to have millions of people grabbing videos from Google’s few data centers. So Google installs its own server racks in various outposts of its network—mini data centers, sometimes connected directly to ISPs like Comcast or AT&T—and stuffs them with popular videos. That means that if you stream, say, a Carly Rae Jepsen video, you probably aren’t getting it from Lenoir or The Dalles but from some colo just a few miles from where you are.

Over the years, Google has also built a software system that allows it to manage its countless servers as if they were one giant entity. Its in-house developers can act like puppet masters, dispatching thousands of computers to perform tasks as easily as running a single machine. In 2002 its scientists created Google File System, which smoothly distributes files across many machines. MapReduce, a Google system for writing cloud-based applications, was so successful that an open source version called Hadoop has become an industry standard. Google also created software to tackle a knotty issue facing all huge data operations: When tasks come pouring into the center, how do you determine instantly and most efficiently which machines can best afford to take on the work? Google has solved this “load-balancing” issue with an automated system called Borg.
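
To make the MapReduce idea concrete, here is a toy word-count sketch in plain Python; it mimics the map and reduce phases conceptually but uses none of Google's or Hadoop's actual APIs.

# Toy illustration of the map/reduce programming model -- not Google's MapReduce
# or Hadoop. A "map" phase emits key/value pairs; a "reduce" phase combines
# all values that share a key. Real systems shard both phases across machines.
from collections import defaultdict

def map_phase(document: str):
    for word in document.split():
        yield word.lower(), 1          # emit (word, 1) for every occurrence

def reduce_phase(pairs):
    totals = defaultdict(int)
    for key, value in pairs:           # group by key and sum the counts
        totals[key] += value
    return dict(totals)

docs = ["the data center as a computer", "a warehouse full of computers"]
pairs = [p for doc in docs for p in map_phase(doc)]
print(reduce_phase(pairs))             # e.g. {'the': 1, 'data': 1, 'a': 2, ...}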

These innovations allow Google to fulfill an idea embodied in a 2009 paper written by Hölzle and one of his top lieutenants, computer scientist Luiz Barroso: “The computing platform of interest no longer resembles a pizza box or a refrigerator but a warehouse full of computers … We must treat the data center itself as one massive warehouse-scale computer.”

This is tremendously empowering for the people who write Google code. Just as your computer is a single device that runs different programs simultaneously—and you don’t have to worry about which part is running which application—Google engineers can treat seas of servers like a single unit. They just write their production code, and the system distributes it across a server floor they will likely never be authorized to visit. “If you’re an average engineer here, you can be completely oblivious,” Hölzle says. “You can order x petabytes of storage or whatever, and you have no idea what actually happens.”

But of course, none of this infrastructure is any good if it isn’t reliable. Google has innovated its own answer for that problem as well—one that involves a surprising ingredient for a company built on algorithms and automation: people.

At 3 am on a chilly winter morning, a small cadre of engineers begin to attack Google. First they take down the internal corporate network that serves the company’s Mountain View, California, campus. Later the team attempts to disrupt various Google data centers by causing leaks in the water pipes and staging protests outside the gates—in hopes of distracting attention from intruders who try to steal data-packed disks from the servers. They mess with various services, including the company’s ad network. They take a data center in the Netherlands offline. Then comes the coup de grâce—cutting most of Google’s fiber connection to Asia.

Turns out this is an inside job. The attackers, working from a conference room on the fringes of the campus, are actually Googlers, part of the company’s Site Reliability Engineering team, the people with ultimate responsibility for keeping Google and its services running. SREs are not merely troubleshooters but engineers who are also in charge of getting production code onto the “bare metal” of the servers; many are embedded in product groups for services like Gmail or search. Upon becoming an SRE, members of this geek SEAL team are presented with leather jackets bearing a military-style insignia patch. Every year, the SREs run this simulated war—called DiRT (disaster recovery testing)—on Google’s infrastructure. The attack may be fake, but it’s almost indistinguishable from reality: Incident managers must go through response procedures as if they were really happening. In some cases, actual functioning services are messed with. If the teams in charge can’t figure out fixes and patches to keep things running, the attacks must be aborted so real users won’t be affected. In classic Google fashion, the DiRT team always adds a goofy element to its dead-serious test—a loony narrative written by a member of the attack team. This year it involves a Twin Peaks-style supernatural phenomenon that supposedly caused the disturbances. Previous DiRTs were attributed to zombies or aliens.

Some halls in Google’s Hamina, Finland, data center remain vacant—for now.
Photo: Google/Connie Zhou

As the first attack begins, Kripa Krishnan, an upbeat engineer who heads the annual exercise, explains the rules to about 20 SREs in a conference room already littered with junk food. “Do not attempt to fix anything,” she says. “As far as the people on the job are concerned, we do not exist. If we’re really lucky, we won’t break anything.” Then she pulls the plug—for real—on the campus network. The team monitors the phone lines and IRC channels to see when the Google incident managers on call around the world notice that something is wrong. It takes only five minutes for someone in Europe to discover the problem, and he immediately begins contacting others.

“My role is to come up with big tests that really expose weaknesses,” Krishnan says. “Over the years, we’ve also become braver in how much we’re willing to disrupt in order to make sure everything works.” How did Google do this time? Pretty well. Despite the outages in the corporate network, executive chair Eric Schmidt was able to run a scheduled global all-hands meeting. The imaginary demonstrators were placated by imaginary pizza. Even shutting down three-fourths of Google’s Asia traffic capacity didn’t shut out the continent, thanks to extensive caching. “This is the best DiRT ever!” Krishnan exclaimed at one point.

The SRE program began when Hölzle charged an engineer named Ben Treynor with making Google’s network fail-safe. This was especially tricky for a massive company like Google that is constantly tweaking its systems and services—after all, the easiest way to stabilize it would be to freeze all change. Treynor ended up rethinking the very concept of reliability. Instead of trying to build a system that never failed, he gave each service a budget—an amount of downtime it was permitted to have. Then he made sure that Google’s engineers used that time productively. “Let’s say we wanted Google+ to run 99.95 percent of the time,” Hölzle says. “We want to make sure we don’t get that downtime for stupid reasons, like we weren’t paying attention. We want that downtime because we push something new.”
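
The error-budget idea is just arithmetic on the availability target. A minimal sketch, assuming the 99.95 percent figure from Hölzle's example and a 30-day month (both merely illustrative):

# Illustrative sketch of an availability "error budget" -- not Google's tooling.
# An uptime target fixes how much downtime a service is allowed per period;
# pushing risky changes "spends" that budget.

def downtime_budget_minutes(availability: float, days: int = 30) -> float:
    total_minutes = days * 24 * 60
    return (1.0 - availability) * total_minutes

# 99.95% over a 30-day month leaves roughly 21.6 minutes of allowed downtime.
print(round(downtime_budget_minutes(0.9995), 1))   # 21.6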

Nevertheless, accidents do happen—as Sabrina Farmer learned on the morning of April 17, 2012. Farmer, who had been the lead SRE on the Gmail team for a little over a year, was attending a routine design review session. Suddenly an engineer burst into the room, blurting out, “Something big is happening!” Indeed: For 1.4 percent of users (a large number of people), Gmail was down. Soon reports of the outage were all over Twitter and tech sites. They were even bleeding into mainstream news.

The conference room transformed into a war room. Collaborating with a peer group in Zurich, Farmer launched a forensic investigation. A breakthrough came when one of her Gmail SREs sheepishly admitted, “I pushed a change on Friday that might have affected this.” Those responsible for vetting the change hadn’t been meticulous, and when some Gmail users tried to access their mail, various replicas of their data across the system were no longer in sync. To keep the data safe, the system froze them out.

The diagnosis had taken 20 minutes, designing the fix 25 minutes more—pretty good. But the event went down as a Google blunder. “It’s pretty painful when SREs trigger a response,” Farmer says. “But I’m happy no one lost data.” Nonetheless, she’ll be happier if her future crises are limited to DiRT-borne zombie attacks.

One scenario that DiRT never envisioned was the presence of a reporter on a server floor. But here I am in Lenoir, earplugs in place, with Joe Kava motioning me inside.

We have passed through the heavy gate outside the facility, with remote-control barriers evoking the Korean DMZ. We have walked through the business offices, decked out in Nascar regalia. (Every Google data center has a decorative theme.) We have toured the control room, where LCD dashboards monitor every conceivable metric. Later we will climb up to catwalks to examine the giant cooling towers and backup electric generators, which look like Beatle-esque submarines, only green. We will don hard hats and tour the construction site of a second data center just up the hill. And we will stare at a rugged chunk of land that one day will hold a third mammoth computational facility.

But now we enter the floor. Big doesn’t begin to describe it. Row after row of server racks seem to stretch to eternity. Joe Montana in his prime could not throw a football the length of it.

During my interviews with Googlers, the idea of hot aisles and cold aisles has been an abstraction, but on the floor everything becomes clear. The cold aisle refers to the general room temperature—which Kava confirms is 77 degrees. The hot aisle is the narrow space between the backsides of two rows of servers, tightly enclosed by sheet metal on the ends. A nest of copper coils absorbs the heat. Above are huge fans, which sound like jet engines jacked through Marshall amps.


We walk between the server rows. All the cables and plugs are in front, so no one has to crack open the sheet metal and venture into the hot aisle, thereby becoming barbecue meat. (When someone does have to head back there, the servers are shut down.) Every server has a sticker with a code that identifies its exact address, useful if something goes wrong. The servers have thick black batteries alongside. Everything is uniform and in place—nothing like the spaghetti tangles of Google’s long-ago Exodus era.

Blue lights twinkle, indicating … what? A web search? Someone’s Gmail message? A Glass calendar event floating in front of Sergey’s eyeball? It could be anything.

Every so often a worker appears—a long-haired dude in shorts propelling himself by scooter, or a woman in a T-shirt who’s pushing a cart with a laptop on top and dispensing repair parts to servers like a psychiatric nurse handing out meds. (In fact, the area on the floor that holds the replacement gear is called the pharmacy.)

How many servers does Google employ? It’s a question that has dogged observers since the company built its first data center. It has long stuck to “hundreds of thousands.” (There are 49,923 operating in the Lenoir facility on the day of my visit.) I will later come across a clue when I get a peek inside Google’s data center R&D facility in Mountain View. In a secure area, there’s a row of motherboards fixed to the wall, an honor roll of generations of Google’s homebrewed servers. One sits atop a tiny embossed plaque that reads JULY 9, 2008. GOOGLE’S MILLIONTH SERVER. But executives explain that this is a cumulative number, not necessarily an indication that Google has a million servers in operation at once.

Wandering the cold aisles of Lenoir, I realize that the magic number, if it is even obtainable, is basically meaningless. Today’s machines, with multicore processors and other advances, have many times the power and utility of earlier versions. A single Google server circa 2012 may be the equivalent of 20 servers from a previous generation. In any case, Google thinks in terms of clusters—huge numbers of machines that act together to provide a service or run an application. “An individual server means nothing,” Hölzle says. “We track computer power as an abstract metric.” It’s the realization of a concept Hölzle and Barroso spelled out three years ago: the data center as a computer.

As we leave the floor, I feel almost levitated by my peek inside Google’s inner sanctum. But a few weeks later, back at the Googleplex in Mountain View, I realize that my epiphanies have limited shelf life. Google’s intention is to render the data center I visited obsolete. “Once our people get used to our 2013 buildings and clusters,” Hölzle says, “they’re going to complain about the current ones.”

Asked in what areas one might expect change, Hölzle mentions data center and cluster design, speed of deployment, and flexibility. Then he stops short. “This is one thing I can’t talk about,” he says, a smile cracking his bearded visage, “because we’ve spent our own blood, sweat, and tears. I want others to spend their own blood, sweat, and tears making the same discoveries.” Google may be dedicated to providing access to all the world’s data, but some information it’s still keeping to itself.

Google Announces Project Tango Smartphone With 3D Sensors That Can Map Your Environment



Google has announced Project Tango, a 5-inch smartphone containing customized hardware and software designed to track the full 3D motion of the device while simultaneously creating a map of your environment.

These sensors allow the phone to make over a quarter million 3D measurements every second, updating its position and orientation in real-time, combining that data into a single 3D model of the space around you.

The smartphone runs Android and includes development APIs that provide position, orientation, and depth data to standard Android applications written in Java or C/C++, as well as to the Unity Game Engine.

Google imagines the phone being used to capture the dimensions of your home simply by walking around with your device before you go furniture shopping. Or perhaps directions to a location could continue beyond its street address, preventing you from ever being lost in a new building. It could help the visually impaired navigate unfamiliar places. It might assist you in finding the exact shelf a product is on in a superstore.

Imagine playing hide-and-seek in your house with your favorite game character, or transforming the hallways into a tree-lined path. Imagine competing against a friend for control over territories in your home with your own miniature army, or hiding secret virtual treasures in physical places around the world.

Google is letting developers apply to be one of the first to get a prototype of the Project Tango smartphone.

Currently, we have 200 prototype dev kits. We have allocated some of these devices for projects in the areas of indoor navigation/mapping, single/multiplayer games that use physical space, and new algorithms for processing sensor data. We have also set aside units for applications we haven’t thought of yet. Tell us what you would build. Be creative. Be specific. Be bold.

Firefox OS Expands to Higher-Performance Devices


Mozilla has announced that its Firefox OS is expanding to higher-performance smartphones and tablets.

Today, device partners ALCATEL ONETOUCH, Huawei, LG and ZTE are all using Firefox OS on a broad range of smartphones that are tailored for different types of consumers. The Firefox OS devices unveiled today showcase dual-core processors for better performance, higher screen resolution and more. The newest Firefox OS devices to join the family include the ZTE Open C and Open II, Alcatel ONETOUCH Fire C, Fire E, Fire S and Fire 7 tablet, all using Snapdragon processors from Qualcomm Technologies Inc., a leader in mobile communications.

Upcoming versions of Firefox OS will offer users new features and services including new and intuitive navigation, a powerful universal search feature, support for LTE networks and dual SIM cards, easy ways to share content, ability to create custom ringtones, replaceable home screens and Firefox Accounts.

Coming next for Firefox OS:
● Deep customization options for operators and manufacturers, developers and users. This includes the ability to create custom ringtones and replaceable home screens, which were direct requests from Firefox OS users.
● A new universal search that will revolutionize how users discover content on their phones. The feature is available on any screen – simply swipe down from the top to find new apps, content or navigate to anything on the phone or the Web.
● New navigation features to make multitasking intuitive, fluid and smart, much like how users interact with the Web. Users can easily swipe from the left and right edges to seamlessly move between pages, content and apps in a fun way that saves time.
● Easy and direct sharing of content (and even software updates) in a secure way with NFC support, without the need for data or Wifi.
● LTE support to make the mobile experience even faster.
● Firefox OS will introduce Firefox Accounts and services. Firefox Accounts is a safe and easy way for users to create an account that enables them to sign in and take Firefox everywhere. With Firefox Accounts, Mozilla can better integrate services including Firefox Marketplace, Firefox Sync, backup, storage, or even a service to help locate, message or wipe a phone if it were lost or stolen.

Nokia X price is around 7.5k, the X+ would be around 8,500, and the Nokia XL around 9 to 10k



After months of heavy rumor, Nokia has just announced that it's launching its first Android handsets: the X, X+ and XL.

All the phones are built on the open source Android OS forked especially for Nokia. The X and X+ feature a 4-inch screen, while the XL packs a 5-inch IPS display. The X+ is differentiated from the X by extra memory and expandable storage, though it’s not clear quite what that means in terms of specs.

During the Mobile World Congress presentation, Steve Elop explained that users will “benefit from the Android apps and ecosystem, but we have differentiated.” Essentially that means there will be plenty of Microsoft and Nokia apps included from the get-go. Skype, for instance, will be preinstalled and will offer users one month of free calls to landlines and mobiles, and Nokia's navigation apps will feature, too.

More importantly, the phones take people to Microsoft’s cloud, not Google’s. Indeed, it seems Nokia is distancing itself from Google as much as possible with these Android devices, and Elop went as far as saying that the “Nokia X together with Lumia represents a deliberate strategy to leverage Microsoft services.” There will, though, be plenty—”hundreds of thousands at launch,” apparently—of conventional Android apps available through a Nokia-specific app store.


Price, you ask? Well, Steve Elop was keen to point out that the X range is designed to be more affordable than the Lumia range, both now and in the future. The phones will be “broadly available globally”, starting in growth markets, and they'll cost $125 for the X, $135 for the X+, and $150 for the XL.


Confirmed: Samsung will announce the Galaxy S5 globally later this month (report: Akki)



Samsung recently sent out invitations to a press conference that will take place ahead of the annual Mobile World Congress trade show in Barcelona later this month. The event is dubbed Unpacked 5, and the presence of “5” in its title caused the tech press to jump to the conclusion that Samsung's next flagship phone, the Galaxy S5, would debut there. That notion remained speculation until Wednesday morning, when The New York Times confirmed that Samsung's new Galaxy S5 will in fact debut at the company's February 24th press conference.

The report also notes that Samsung should and will shift its focus away from gimmicky features in the Galaxy S5.

Hmm, where have we heard that advice before?

Samsung’s new Galaxy S5 is expected to feature a new AMOLED display with 2K resolution that measures about 5.2 inches diagonally. Reports also suggest the phone will include a quad-core Snapdragon processor or an octa-core Samsung Exynos chipset depending on region, 3GB of RAM, 32 or 64GB of memory, a 16-megapixel camera, a 3.2-megapixel front-facing camera, a 3,200 mAh battery and Android 4.4 KitKat.

The NYT also says that Samsung will unveil a sequel to the Galaxy Gear smartwatch during its event later this month.

SATYA NADELLA MIGHT BE BAD NEWS FOR YOUNG INDIA… HERE'S WHY



Only the third CEO in 38-year-old Microsoft's lifetime, 47-year-old Satya Nadella is being lauded by the American and Indian press alike. I am, in fact, sweating a couple of jugs over what Satya's elevation as Microsoft's new CEO will amount to in a few weeks.

Here is my biggest grievance. Satya Nadella will be used as an example by Indian parents to tell their children that engineering is not dead and can indeed take you places such as the CEO's chair at Microsoft. At a time when the creative arts need a fillip and our historical heritage as well as our cultural mores need additional modes of expression, this is a dangerous thing.


Wonder what the new CEO will be taking home? According to a recent regulatory filing by Microsoft with the SEC, Nadella’s annual salary has been increased to $1.2 million.


As a professor of journalism at a top Mumbai college, I have had parents come up to me confidentially and tell me that they did not approve of their wards taking up subjects such as journalism, advertising, PR etc. To them, these are not real streams of education but more of a hobby class that needs to be tolerated for a finite amount of time. Most parents secretly hope their kids will realise their folly and go back to engineering just as they had envisioned. It is only when realisation finally hits that they come and ask me about the prospects in these new fields.

Satya’s bachelor of engineering degree in electronics and communication from Manipal Institute of Technology, his MS in Computer Science from the University of Wisconsin-Milwaukee, and his MBA from the University of Chicago Booth School of Business will be the new benchmark for teenagers hereon.



If only Satya himself had done something truly revolutionary with his own career, things might have been a whole lot different. In fact, he could have been the new poster boy of a young India hungry for success across the world. But Satya chose to lead a simple and staid life, putting in years of hard work at Microsoft before his elevation to CEO. That is not path-breaking but a routine way of getting to the top.

You could call my analysis baseless paranoia and I would gladly like to be proved wrong but history is a better teacher than we credit it for. When I was studying in the last decade, almost 90 percent of my schoolmates went to engineering colleges after their junior college. Most were just following the family tradition and did not know what to make of their careers while doing engineering.

A good student went to a university in California to study computer science but is now working for his father’s interior design company after dropping out mid-way through the course. A brilliant student went into depression after he could not cope with the accumulated KTs and non-performance and dropped out to take banking exams. They were driven by their parents to emulate another Indian who had made headlines by joining hands with Microsoft to offer a revolutionary email service to the world called Hotmail. His name was Sabeer Bhatia.

The new generation of students have a new talisman to emulate now and his name is Satya Nadella.

Google’s new wearable project is a smart contact lens with medical uses.


Google loves wearables, and this time it's getting even closer to your body with a developmental smart contact lens. Through miniaturized electronics, it can apparently measure the levels of glucose in your tears, offering diabetics an easier way to monitor their condition without the needles and the blood — something we've reported on several times before. A tiny (really tiny) wireless chip and glucose sensor are wedged between two layers of “biocompatible” contact lens material, and Google says it's already working on embedding tiny LED lights for notifications, too. There's been no shortage of developmental contact lens tech over the last few years, but the clout of Google means this could well be the most realistic mainstream offering, in addition to its very practical use cases. Google is currently angling for partners with more expertise in the medical market to help make it happen and is “in discussions with the FDA” to ensure the tech ticks all the right healthcare boxes before it progresses further.

Recode’s got a deep dive on the make-up of the smart contact: we’ve added their science textbook-grade diagram right after the break.

Akshay Mathur

Update Galaxy Tab 7.0 Plus P6200 to Android 4.1.2 Jelly Bean Official Firmware


XXMC3 Android 4.1.2 Jelly Bean official firmware for Galaxy Tab 7.0 Plus P6200 is now available for download. You can get this latest Jelly Bean firmware for the Galaxy Tab 7.0 Plus from Samsung KIES, or if you can’t get this update for your region, you can always follow our easy tutorial below for manually updating your tablet with this new firmware using Odin. P6200XXMC3 was just released earlier today so it will be rolled out to all the regions in a few days/weeks. This is an unbranded firmware and can be installed on any Galaxy Tab 7.0 Plus. You can install XXMC3 Android 4.1.2 Jelly Bean official firmware on Galaxy Tab 7.0 Plus P6200 now using the tutorial below.

As you read further, we will guide you through the entire process of how to update Galaxy Tab 7.0 Plus P6200 to XXMC3 Android 4.1.2 official firmware using ODIN. Make sure you back up all your data using the tools given below as a precaution, and don't forget to read the important tips below and keep them in mind. You never know when something might go wrong. The following tips are important as they will help the installation procedure go smoothly without any issues. Let's continue with the tutorial below.


Disclaimer: All the custom ROMs and firmwares, official software updates, tools, mods or anything mentioned in the tutorial belong to their respective owners/developers. We (TeamAndroid.com) or the developers are not to be held responsible if you damage or brick your device. We don't have you at gunpoint to try out this tutorial ;-)

XXMC3 Android 4.1.2 Firmware Details:

PDA: P6200XXMC3
CSC: P6200OXAMC3
Version: 4.1.2
Date: 2013 March
Regions:

If you plan on rooting this tablet or are still confused for why to root Galaxy Tab 7.0 Plus P6200, read: Benefits of Rooting Your Android Device.

Samsung Galaxy Tab 7.0 Plus USB Drivers

You will need to connect your Android tablet with the computer. For that, please make sure you have installed the Android 4.1.2 USB drivers for Samsung Galaxy Tab 7.0 Plus properly. If not, you can download the latest official drivers from our Android USB Drivers section here:

Download Samsung Galaxy Tab 7.0 Plus USB drivers!

Backup and Other Important Tips

Done with the USB drivers? Perfect. The tutorial is on the next page, but first, please take a look at the following tips and important points you need to take care of. These are important, as we don’t want anyone losing their personal data or apps:

Always backup your important data that you might need after you install a new custom ROM, an official software update or anything else. Make a backup for just in case, you never know when anything might go wrong. See below for some tips on how to backup data:

  • Backup your Apps. How? –> How to Backup Android Apps.
  • Backup your SMS messages. How? –> How to Backup/Restore SMS Messages.
  • Backup Contacts, Gmail and Google Data. How? –> Sync with Google Sync.
  • Backup Call History. How? –> How to Backup Call History.
  • Backup WhatsApp Messages. How? –> How to Backup/Restore WhatsApp Messages.
  • Backup APN Settings: GPRS, 3G, MMS Settings. How? Note down everything from: Settings > Wireless & Networks (More…) > Mobile networks > Access Point Names.

Samsung users can also back up data using Samsung KIES. If you back up data manually, you get more options to choose from, and it is very easy to move data across Android devices from different manufacturers, e.g. moving Samsung Galaxy Note backup data to an HTC One X.

If you already have a custom recovery (ClockworkMod, TWRP etc.) installed on your tablet, we strongly recommend you to also backup using that as it creates a complete image of your existing tablet set up.
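
If you prefer doing the backup from a computer, adb (the Android Debug Bridge shipped with the Android SDK) has a built-in full-backup command. The snippet below is a small optional helper and is not part of the original guide; it assumes adb is on your PATH and that USB debugging (see tip 1 below) is already enabled on the tablet.

# Optional helper, not part of the original guide: trigger adb's built-in full
# backup from a computer. Assumes the Android SDK's adb tool is on PATH and
# USB debugging is enabled on the tablet.
import subprocess

def adb_full_backup(outfile: str = "tab7plus-backup.ab") -> None:
    # "adb backup -apk -shared -all" backs up app APKs, shared storage and all
    # installed apps into a single .ab file. The tablet shows a confirmation
    # prompt that you must accept on its screen before the backup starts.
    subprocess.run(["adb", "backup", "-apk", "-shared", "-all", "-f", outfile],
                   check=True)

if __name__ == "__main__":
    adb_full_backup()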

A few more optional tips that you should remember:

1. Having problems connecting your Android tablet to the computer? You need to enable USB debugging mode. See here: How to Enable USB Debugging — for Gingerbread, Ice Cream Sandwich and Jelly Bean.

2. Make sure your Android device is charged up to 80-85% battery level. This might help you: How to Check Battery Percentage. Why? Because if your tablet goes off suddenly while installing a custom ROM, flashing an official firmware update or installing mods etc. — your tablet might get bricked or go dead permanently. No one wants that, right?

4. Most of the tutorials and how-to guides on Team Android are for factory unlocked Android phones and tablets. We recommend NOT to try our guides if your tablet is locked to a carrier, unless we have specified the carrier name or device model.

If you find the above tips useful and they were helpful to you, please consider giving us a +1 or LIKE to thank us!

All set and ready? Good. Now, let's proceed with the tutorial on the next page and update Galaxy Tab 7.0 Plus P6200 to XXMC3 Android 4.1.2 Jelly Bean firmware.

Nokia Lumia 1520, the 6-inch Windows Phone.


Nokia Lumia 1520 (Photo credit: Janitors)


While big-screen phones of 5.5 inches and up have become common, Windows Phone users had been limited to 5-inch screens. However, Nokia's Lumia 1520 changes things a great deal. The first true big-screen smartphone running Windows Phone Black was shown off to the media in India today. Nokia has set the best buy price for the Lumia 1520 at Rs 46,999. I had the chance to use the device for a while, and the first impressions are extremely impressive.

The 6-inch screen: The IPS factor
Big-screen phones have always had their fans, who point out that tasks like typing, web browsing and gaming are a lot more comfortable on a larger display. Equally, there are critics who immediately point to the loss of one-handed usability as a reason not to upgrade. Subjective as it is, I cannot help but be impressed with the Lumia 1520. The Full HD IPS display features Nokia's ClearBlack technology, also seen on some of the earlier Lumia phones. The 1520's IPS panel has purer whites and darker blacks, which makes the 1020's screen tone look warmer in comparison. The LG G2 has one of the best IPS displays we have seen in a smartphone in a long time, but this could run it close when we do the detailed testing.

Power Package: The mistakes have been corrected
The Lumia 1520 corrects the mistakes of the 1020, primarily the power package. This features the quad-core 2.2GHz Qualcomm Snapdragon 800 processor, which powers the fastest Android phones out there, including the Samsung Galaxy Note 3. With 2GB of RAM to assist, performance will not be an issue.

Camera: On paper, the Lumia 1020 is a slightly better bet
The Lumia 1520 gets the PureView family snapper, but this is a 20MP one instead of the 41MP clicker that the Lumia 1020 came with. This camera will capture a low-resolution 5-megapixel shot and a full-resolution photo at the same time. However, low-light performance of the 1520 may not be as good as the Lumia 1020's, considering the smaller 1/2.5in sensor size compared to the elder sibling's 2/3in sensor. We will put this camera through a detailed test to understand the performance differences better.


Samsung S5: rumors on the upcoming Samsung Galaxy S5



Samsung S5: Wild, wicked and crazy rumors on the upcoming Samsung Galaxy S5

Will it have an iris scanner? Will it have a flexible display? Will it go 64-bit? We have a list of rumors going around; feel free to add yours…