To illustrate how data plays a growing role in today's flight booking engines, I've broken down, piece by piece, how each item of data collected about you can be used, analysed and overlaid with other data sets to paint a picture of who you are and what motivates you to purchase a specific product. Every day, trillions of calculations crunch this goldmine of data into real, tangible, high-revenue opportunities for the airlines and their frequent flyer programs. Armed with key insights, a holistic overview of your and other customers' detailed profile information can be applied to direct booking channels designed to customize pricing for your personal situation at that very moment.
As of today, Parse Push for our .NET SDK works on WinRT, Silverlight, .NET 4.5, Windows Phone, Xamarin iOS and Android, and Unity iOS and Android. As you may have noticed, Push works differently on each platform – even between Windows 8 and Windows Phone 8. First, registering for push notifications is a different experience on each platform. On Windows and Windows Phone, you have to store the channel URI to listen for the push. With Android, you have to request a GCM registration ID. On iOS, you have to register for remote notifications. But everything boils down to one problem: How can you uniquely identify this device so that the Push Server can send a targeted push to this device? In the Parse SDK, we simplify this problem by saving a ParseInstallation object that contains special fields which enable push notifications for that device.
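The unifying idea — each platform's native push identifier is normalized into one installation record — can be sketched as follows. This is an illustrative Python sketch of the record the SDK maintains, not the exact Parse schema; the field names (`deviceType`, `pushType`, `deviceToken`, `channelUri`) are assumptions modeled on the documented installation object.

```python
def build_installation(platform: str, identifier: str, channels=None):
    """Normalize each platform's push identifier into one installation
    record, as the Parse SDKs do with ParseInstallation.
    Field names are illustrative, not the exact Parse schema."""
    payload = {"channels": channels or []}
    if platform == "android":
        payload.update(deviceType="android", pushType="gcm",
                       deviceToken=identifier)   # GCM registration ID
    elif platform == "ios":
        payload.update(deviceType="ios",
                       deviceToken=identifier)   # APNs device token
    elif platform in ("winrt", "winphone"):
        payload.update(deviceType=platform,
                       channelUri=identifier)    # WNS/MPNS channel URI
    else:
        raise ValueError(f"unsupported platform: {platform}")
    return payload

install = build_installation("android", "gcm-reg-id-123", ["news"])
```

Whatever the platform, the push server only ever sees one record type keyed to one device — which is exactly the simplification the SDK is after.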
“The most important strength women can bring to the workplace is just being a woman in the workplace. I don’t want to be treated differently as a woman, but we do need to raise awareness about diversity,” said Whitney. ... He explained that he aims to introduce a diversity strategy in the next three months at the BBC, which will use recruitment targets to ensure women and under-represented groups are on shortlists for jobs, with the hope that those looking for staff will cast the net wider to find skilled workers. “We’re not talking about positive discrimination here, but we’re saying you haven’t looked hard enough if all the people you’ve brought to that interview are the same,” said Ogungbesan.
"There is a great need for software engineers worldwide, and in the US particularly," said Julien Barbier, co-founder and CEO of the Holberton School in a statement. "Holberton School uses a proven system that more closely replicates real-world employment. In this project-based and peer learning system there are no formal teachers and no formal courses. Instead, everything is project-centered." Specifically, Barbier, formerly a senior director at Docker, continued, "Students have to solve increasingly difficult programming challenges, with minimal initial directions about how to solve them. As a consequence, students naturally look for the theory and tools they need, understand them, use them, work together, and help each other. And, by the way, they love it -- I know because I am a graduate of the same system."
It's impossible to tell if the new flaws discovered by Forshaw were introduced intentionally or not, but they do show that despite professional code audits, serious bugs can remain undiscovered. The first phase of the TrueCrypt audit project, performed by security engineers from iSEC Partners, a subsidiary of information assurance company NCC Group, covered the driver code, but "Windows drivers are complex beasts" and it's easy to miss local elevation of privilege flaws, Forshaw said on Twitter. The Google researcher hasn't disclosed details about the two bugs yet, saying that he usually waits seven days after a patch is released to open his bug reports. Since TrueCrypt is no longer actively maintained, the bugs won't be fixed directly in the program's code.
Data can be worked on in real time as it comes off the Web, and users will be working with the data in its native format. "You are not depending on IT to set up the schema," Norris said. "This adds to the capability of how data is stored." The community version of MapR-DB with JSON will enable users to test and develop their own apps on the platform, Norris said. "When it becomes part of the business backbone, the company will be glad to pay for the enterprise features," he said. That version will have the governance and security features needed for corporate IT use. The addition of JSON support to MapR's product line is yet another step in adding utility to Hadoop. Last month, MapR announced it was integrating its Hadoop Distribution 5.0 service with Amazon Web Services.
To be candid, many data scientists operate in fear, wondering what they should be doing as it relates to the business. In my judgment the questions below address both parties with the common goal of a win-win for the organization – helping data scientists support their organization as they should, and business professionals becoming more informed with each analysis. ... It is important to remember that data science techniques are tools that we can use to help make better decisions within an organization, and are not an end in themselves. It is paramount that, when tasked with creating a predictive model, we fully understand the business problem that the model is being constructed to address and ensure that it does address it.
First, get an idea of what is really going on across your organization. Simple hardware and software asset mapping tools show what is attached and being run against your IT platform. There will probably be a few surprises there. For example, a department that didn't authorize spending on an enterprise-scale storage area network might have its own network attached storage box running, purchased outside its IT budget. Expect software compliance issues as well: That NAS box probably runs a copy of MySQL or Microsoft SQL Server, even though the organization's standard for database management is Oracle.
“The impact of the IoT on storage infrastructure is another factor contributing to the increasing demand for more storage capacity, and one that will have to be addressed as this data becomes more prevalent,” according to a Gartner report on the IoT and the datacenter. “The focus today must be on storage capacity, as well as whether or not the business can harvest and use IoT data in a cost-effective manner,” the report continues. ... Most CIOs will deal with the first phase of the Internet of Things by investing in and deploying a platform. Any number of them exist, but the one getting the most buzz right now seems to be Google’s Brillo product, along with the AllJoyn platform from Qualcomm and the platform created by the Industrial Internet Consortium.
Whether it is shopping, ordering your favourite food, saving money, hiring a cab or any other routine activity online, which device do you pick up in an instant to carry out all such activities? Your smartphone, right! Well, it is the same for every one of us. Our cellular device has emerged as a real friend in need and is playing a crucial role in simplifying our daily tasks, changing our outlook towards information. It is not at all wrong to say that mobile technology is growing at the speed of light and that apps have become an integral part of the digital ecosystem. In fact, these apps are progressing towards a ubiquitous presence. Staying up-to-date with the latest trends in mobile app development has therefore become an imperative rather than merely an option.
Quote for the day: "The technologies which have had the most profound effects on human life are usually simple." -- Freeman Dyson
First, you need rules that lay out who can do what with what information, with which providers, and in which world regions and time zones. Losing control of that is one of the biggest stumbling blocks organizations come across, Cancila said. "You want to be able to position your organization to use cloud services effectively, but you have to retain some level of control so that you meet the requirements of the business," she said. Of course, organizations already use authentication mechanisms like Microsoft Active Directory so users and computers can access systems. If they're using Azure, they can manage users directly in the cloud by having them log in to the cloud version, Azure AD, a separate directory of users that lives in the cloud.
Efficiency is a great place to start, and there are still thousands of colocation centers with PUE values of over 2.0 (depending on regional climate, a good PUE should be 1.2–1.45). This means that, on average, colocation facilities are burning 30-plus percent more energy than they should to support their hosted IT equipment. This inefficiency is bad enough, but when you combine it with the dirty energy mix that most of these 3685 data centers run on, the story only gets worse. Consider that a data center running at a PUE of 2.0 on natural gas produces less carbon output than a 1.4 PUE data center running on coal-generated energy. Combining a high PUE with a dirty energy supply just exacerbates the situation.
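To sanity-check that last comparison: carbon output scales with IT load × PUE × the carbon intensity of the energy supply. A quick Python sketch — the carbon-intensity figures (0.45 and 0.95 kg CO2/kWh for gas and coal) are ballpark assumptions of my own, not numbers from the article:

```python
# Rough check of the claim: total facility energy = IT load x PUE,
# and carbon = total energy x grid carbon intensity.
# Intensity figures are ballpark assumptions (kg CO2 per kWh).
GAS, COAL = 0.45, 0.95

def annual_carbon(it_load_kw, pue, intensity):
    """Tonnes of CO2 per year for a facility with the given IT load."""
    hours_per_year = 8760
    return it_load_kw * pue * hours_per_year * intensity / 1000.0

gas_dc  = annual_carbon(1000, 2.0, GAS)   # inefficient facility, cleaner fuel
coal_dc = annual_carbon(1000, 1.4, COAL)  # efficient facility, dirtier fuel

print(f"PUE 2.0 on gas:  {gas_dc:,.0f} t CO2/yr")
print(f"PUE 1.4 on coal: {coal_dc:,.0f} t CO2/yr")
```

Under these assumed intensities, the inefficient gas-powered facility still comes out well below the efficient coal-powered one — the energy mix dominates the PUE difference, which is the article's point.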
HDInsight, our Apache Hadoop-based service, is a key part of the Azure Data Lake. As one of the fastest growing services in Azure, HDInsight gives you the breadth of the Hadoop ecosystem in a managed service that’s monitored and supported by Microsoft. Furthering our commitment to productivity, we’ve also updated our Visual Studio Tools for authoring, advanced debugging, and tuning for Hive queries and Storm topologies running in HDInsight. Today, we are announcing the general availability of HDInsight on Linux. We work closely with Hortonworks and Canonical to provide the HDP™ distribution on the Ubuntu operating system that powers the Linux version of HDInsight in the Data Lake.
The first performance and scalability challenge then is how to keep up with the latest open software progressions, adopting them and getting them to work together. This challenge is where the IBM Open Platform for Apache Hadoop can help. It provides a collection of the latest versions of Hadoop ecosystem components that have been tested, tuned and packaged for easy consumption. This collection also paves the way to exploit even more advanced big data and analytics software tools offered with IBM InfoSphere BigInsights. The next performance and scalability challenge exists below the open software at the physical infrastructure—the full scale-out architecture that represents the compute, networking and storage sprawl.
Here's the kicker: Smith and Shmatikov are moving forward with this research with a grant from Google, the company that helped to put deep learning on the computing map with its research in the first place. In other circles, Google is also known as the company that has played a not-so-small role in making online privacy, or lack thereof, a growing concern. Google bestowed the grant -- the amount of which was not reported -- under its Faculty Research Awards program, which gives one-year awards structured as unrestricted gifts to universities to support research in a range of subjects that might benefit from collaboration with Google, according to Penn State.
Called Unite, the architecture is embodied in the company’s Junos operating system software and encompasses a handful of new and existing Juniper products. They include the EX9200 switch, the Junos Space Network Director management system, and third party products integrated through Juniper’s Open Converged Framework. Unite is intended to enable enterprises to build private clouds and then interconnect them to public cloud infrastructures in a hybrid environment for application access and delivery. At the heart of it is Junos Fusion Enterprise, Junos software designed to provide a single point of network configuration and management for the enterprise network. Junos Fusion Enterprise allows customers to collapse multiple network layers into a single enterprise cloud, Juniper says.
“In IT we’re changing the ways of working from waterfall to agile,” says Shivanandan. Adopting agile has been the first step in turning the ship, and now approximately 70% of Aviva’s IT work is performed in an agile manner. Not all of this is taking place in the digital garage – the transformation is taking place across the entire business. But switching from traditional methods of working, where there is pressure to get it right first time, to an agile approach, where staff are encouraged to “fail fast and learn”, requires time and effort. Agile coaches are being used across the business to train employees in methodologies, standup meetings and ways of working.
Nowadays, it is almost impossible to prevent employees from using social media sites – Facebook, Twitter, LinkedIn, Instagram, Pinterest – while at work. Some businesses are fine with that, even encouraging employees to promote the company and its products or services on social media. At the same time, however, they don’t want productivity to slip, or to have workers portray the company negatively on popular social media channels. So what steps can organizations realistically take to limit or control social media use while at work, without seeming like Big Brother or forbidding its use? Following are five expert tips, along with a sidebar on the legal ramifications of using social media for work or at the office.
To understand why this might be the case, you have to remember that under the current system there are (generally) two stages to a transaction: first you have the assignment of rights and responsibilities (X promises to pay Y $5; this obligation is recorded as a debit in X’s account and a credit in Y’s), then you have the settlement ($5 is actually transferred from X’s account to Y’s). With ACH (the system used by almost all U.S. banks to transfer money between bank accounts), and the systems that run on it, settlement generally takes at least a couple of days, which means that if fraudulent activity is detected before the actual transfer happens, the transfer of funds can be stopped, and Y never gets his ill-gotten payment. This would not be possible in a real-time system because the transfer of money would occur nearly instantly.
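The two-stage flow above can be sketched as a toy ledger — purely illustrative, not how any real ACH system is implemented. Obligations are recorded first; money only moves at settlement, leaving a window in which a flagged transfer can be cancelled:

```python
class ToyLedger:
    """Toy model of a two-stage (authorize, then settle) payment system."""
    def __init__(self, balances):
        self.balances = dict(balances)
        self.pending = []  # stage 1: recorded obligations, not yet settled

    def authorize(self, payer, payee, amount):
        tx = {"payer": payer, "payee": payee, "amount": amount, "ok": True}
        self.pending.append(tx)
        return tx

    def flag_fraud(self, tx):
        tx["ok"] = False  # fraud detected before settlement: stop the transfer

    def settle(self):
        # stage 2: money actually moves (days later, in the ACH case)
        for tx in self.pending:
            if tx["ok"]:
                self.balances[tx["payer"]] -= tx["amount"]
                self.balances[tx["payee"]] += tx["amount"]
        self.pending.clear()

ledger = ToyLedger({"X": 100, "Y": 0})
tx = ledger.authorize("X", "Y", 5)
ledger.flag_fraud(tx)   # caught during the settlement window
ledger.settle()
print(ledger.balances)  # {'X': 100, 'Y': 0} - Y never receives the funds
```

In a real-time system, `settle()` would effectively run immediately after `authorize()`, closing the window in which the fraud check can intervene.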
Service-managed keys can give you the assurances of per-tenant and per-subscription keys, with segregation of duties and auditing, without the headache of managing keys. “But with BYOK, we're requesting customers get involved in a significant way,” Plastina says. “That means setting up vaults, managing vaults; in some cases, that requires HSM-backed keys, so they’re purchasing an HSM on premises, they have to run their own quorums for administrators’ smart cards and PINs, they have to save smartcards in the right place. It definitely raises the burden on them.”
Quote for the day: "Organizations are most vulnerable when they are at the peak of their success." -- R.T. Lenz
Along with Mr. Nadella and Mr. Narayen, others seated on the dais were John Chambers, CEO of Cisco and next Chairman of the US India Business Council (USIBC), and Google CEO Sundar Pichai. Describing Mr. Modi as an “amazing ambassador” of India, Mr. Chambers endorsed Digital India, saying it has the potential to bring about great changes in India. The US and India would be “very strong together under your leadership”, he said. Mr. Chambers said one has to compete against one’s ability to innovate, and not against other companies or countries. “If you can change India, you will change the world,” he said, adding that the internet is the second equalizer in life after education.
The architecture of the financial markets is being rapidly re-engineered. And innovative technology is playing an increasingly pivotal role in this process. In fact, according to research by PwC in 2014, 86 per cent of bank CEOs felt that technological advances are poised to have the greatest impact on banking. ... But fintech start-ups face many barriers to entry when trying to introduce new solutions to large financial institutions. Their small size creates challenges around market adoption, delivery and meeting the stringent contractual or compliance expectations of large financial institutions. It’s clear that smart start-ups are a growing source of innovation for the global financial markets industry.
Executive directors are usually selected for their leadership qualities; they often have experience with generalized management or leadership experience rather than narrow expertise or technical acumen. Why should knowledge of IT be an exception? The truth is that many industries today employ outdated technology. Consumer banking is one — layers of technology have been implemented since the 1960s and almost nothing has been taken out. A total overhaul is required. There are countless other examples. Fax machines remain the preferred way to share health care data in most countries despite the fact that the cloud could theoretically allow clinicians to instantaneously share medical records. Chalk remains the technological tool of choice in most education settings.
Negativity around Shadow IT is partly due to the notion that activity is surreptitiously taking place under the IT department’s nose. In a handful of cases this may be true, but it is more likely departments know activity is taking place and lack visibility into how much, by whom and what the results are. A survey of IT executives released by the Cloud Security Alliance earlier this year finds that nearly 72 percent of executives don’t know how many Shadow IT applications are being used within their organization. In fact, only 8 percent of executives say they truly know the scope of Shadow IT at their organizations. For organizations to truly benefit from Shadow IT, there is a tangible and leading role for the IT department to play.
Businesses are increasingly reporting that projects are delayed while they wait for a trusted contractor to become available. By outsourcing the Project Management function as a Service, you are buying in the solution to your project requirement rather than just the person who will fill the vacancy. If the football manager were able to call upon a ‘PMaaS-like resource’, he’d ring and ask for a goalkeeper – the role he needed filling – and they would fill the gap with a competent person from the bank of goalkeepers on their books. ... In a PMaaS partnership you either call up and ask for someone who fits your brief, or the more intuitive partner will have already carried out a gap analysis of your operation, predicted your requirement and costed it ahead of the project starting.
Trained robots can reduce costs by up to 50 percent, according to the Institute for Robotic Process Automation. Usually, one robot can replace between two and five full-time employees. A robot also does the job without misspelling names or numbers; humans, on the other hand, typically make 10 errors during a 100-step process. A UiPath software robot is at least three times faster than a human - and often even quicker than that. "There are other processes, background automation as we call them, when the robot instantly reads, writes and validates emails, spreadsheets, and PDFs. In such cases, it can be up to 100 times faster than a human," Badita said.
Unveiled in June, the Oculus Touch hand controls make it possible to do things like grasp virtual blocks, push buttons, and shoot a slingshot while using the Rift. The Rift is slated to be released in the first quarter of next year; the controls are set to arrive in the second quarter. As with the Rift, pricing and exact availability for Touch have yet to be announced. At an Oculus developer conference in Los Angeles this week, I set out to figure out how well Oculus Touch works with a range of applications—whether it could really work as an intuitive, simple way to move or throw a digital stapler, play with another person in virtual reality, or make art.
ECS isn't a black-box service. It runs on your own EC2 server instances, which you can SSH into and manage as you would any other EC2 server. The EC2 servers in your cluster run an ECS agent, a simple process that connects from the host into the centralised ECS service. The ECS agent is responsible for registering the host with the ECS service and for handling incoming requests for container deployments or lifecycle events, such as requests to start or stop a container. Incidentally, the Go code for the ECS agent is available as open source. When creating new servers, we can either configure the ECS agent on the instance manually, or use a pre-built AMI which already has it configured.
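Conceptually, the agent's job can be sketched as a register-then-handle-lifecycle loop. This is a toy Python model, not the real amazon-ecs-agent (which is written in Go and considerably more involved); all class and field names here are invented:

```python
class ToyService:
    """Stand-in for the centralised ECS-style service."""
    def __init__(self):
        self.hosts = set()

    def register_host(self, host_id):
        self.hosts.add(host_id)

class ToyContainerAgent:
    """Conceptual sketch of what an ECS-style agent does: register the
    host with the central service, then act on container lifecycle
    requests. Illustrative only - not the real agent's design."""
    def __init__(self, host_id, service):
        self.host_id = host_id
        self.service = service
        self.containers = {}

    def register(self):
        self.service.register_host(self.host_id)

    def handle(self, request):
        # lifecycle events arrive from the central service
        name, action = request["container"], request["action"]
        if action == "start":
            self.containers[name] = "running"
        elif action == "stop":
            self.containers[name] = "stopped"

svc = ToyService()
agent = ToyContainerAgent("i-0abc123", svc)
agent.register()
agent.handle({"container": "web", "action": "start"})
print(agent.containers)  # {'web': 'running'}
```

The point of the sketch is the division of labour: the host-side agent is thin, and all scheduling intelligence lives in the central service.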
In the last article we looked at logging and getting function call stack information. In this article we will look at “debugger attributes”, which control debugging behaviour and provide a rich experience to the debugging user. An attribute is a tag defined over elements like classes, functions, assemblies, etc. These tags determine how the elements should behave at run time. Let us look at the debugging attributes specified below with a simple example: the DebuggerBrowsable attribute, the DebuggerDisplay attribute and the DebuggerHidden attribute.
IoT will drive a new level of awareness, with behavioural prediction, health stats, social presence and similar. "For businesses, the changes will be more extreme. Device manufacturers of all types will be under pressure to make everything smart, and user friendly, and all this while trying to beat their competitors to market. In addition, these devices will all create new data sources, adding to the flood of big data which is already drowning organisations." ... "At the end of the day, harnessing the IoT effectively will mean competitive advantage, and executives need to learn this skill. Possibly the biggest challenge for executives will be the collection and analysis of this data, and then turning this into actionable business insights to gain an advantage."
Quote for the day: "Prosperity belongs to those who learn new things the fastest." -- Paul Zane Pilzer
A particular obstacle is posed by the challenges of laying such an underground network in insurgency-affected states like Arunachal Pradesh, Assam, Manipur, Meghalaya, Mizoram, Nagaland, Sikkim, Indian-administered Kashmir, Chhattisgarh and Jharkhand. A lack of agreement between the central and state governments does not help, and compounding the mix are illiteracy, poverty and a shortage of skilled manpower. The "Digital India" project aims to promote e-education in over 250,000 government schools and e-governance in about 250,000 village councils via internet connections. However, most schools in villages and towns face a severe shortage of qualified computer trainers.
High says it is important to do this because language alone is only part of human communications. “We augment the words with physical gestures to clarify this and that,” High said. “You can bring into the [robot] interface this gesturing, this body language, the eye movement, the subtle cues that we as humans use when we communicate with one another to reinforce our understanding of what we’re expressing.” Robot interaction is becoming an important issue as industrial robots start moving into new settings, requiring them to work alongside people, and as companies try to develop robots for use in stores, offices, and even the home.
In the old days, IT would collect requirements, and then translate them into a business requirements document and issue an RFP to source the solution. That process doesn't work anymore. We still have to understand requirements, but now we can break them down into bite-sized pieces and deliver solutions iteratively. This means we can move forward delivering value without having to use multiple business cycles to develop a long-term technology system. Long-term visions for systems these days get very outdated, very quickly. It is all about iterative learning and deployment. We do small projects that are continuous in nature and that build on each other and provide value at every stage.
The choice of whom to colocate with represents the first stage of the journey but it is also often the most important decision that any business will make along the way. In the near term, colocation gives businesses peace of mind, allowing them to participate in the benefits of a hosted datacentre while managing IT upgrades within their budget and timeline. Companies can migrate elements of the operations and management for maximum benefits and minimum risk. For many organisations, however, this will be just one part of an evolutionary process. Where service providers like Pulsant are able to add significant value over and above the traditional colocation provider is in supporting the transition from colocation to cloud.
The NFV MANO model needs to adapt to this new reality. The original architectural concept, developed in 2011, became a roadmap of new technology elements and standards needed for NFV. A lot has changed since 2011 and it’s all likely to change again, and it’s not clear that the MANO model will — or can — adapt quickly enough. Service providers and technology vendors with whom I’ve spoken in recent months say some of the confusion about the future of service-provider NFV stems from the ubiquitous MANO diagram, which can be seen below. The marketing guys got a hold of this slide and promoted it at every NFV conference on earth. It became a sort of Rosetta Stone for NFV.
“What data scientists use today is a combination of statistical and machine learning algorithms to find patterns in the predictive models they use,” he says. “Traditionally, they’ve had to have a strong mathematical foundation. You hear many data scientists saying ‘I did the math.’ That also refers to running statistical machine learning algorithms. “In the future,” he continues, “the tools are going to de-emphasize the mechanics of doing machine learning. So the data scientists are going to be more creative about the types of models they create, freeing them up to have more time for curiosity to discover new things that may be of value.” ... In the future, advances in data science tools will help leverage the existing data science talent to greater effect, Gualtieri says.
What deep learning will allow us to do is to bridge the semantic gap between the fuzzy thing that is the real world, and the symbolic world computer programs operate in. Simply put, machines will soon have much more understanding of the world than they currently do. A few years from now, you’ll take a picture of your friend Sarah eating an ice cream cone, and some machine in the cloud will recognize Sarah in the said picture. It will know that she’s eating ice cream, probably chocolate flavored by the color of it. Facial expression recognition will make it possible to see that she looks excited with a hint of insecurity.
One of the key ways to prevent technical debt is to create awareness about technical debt in development teams. Development teams must know about technical debt, its various dimensions and types, and the impact of debt on their project. They must be well equipped with code quality concepts, clean coding practices, design smells, and ways to refactor them. The level of understanding and awareness about best practices and above-mentioned concepts could be improved by conducting focused trainings, as well as organizing workshops and conferences. Employing relevant processes can help a development team prevent accumulation of technical debt. Typical examples of such processes are review processes and architecture governance.
“We’re likely going to see years of exploration and talking and some preliminary products with companies like Chain, but it’ll be a very narrow use case,” says Silbert. He predicts that in coming years Bitcoin will rise to become a recognized store of value, something like gold, and that innovation in its design and in services built on top will see the original, public blockchain become an underpinning for financial services of all kinds. “Eventually Wall Street will come to appreciate that the Bitcoin blockchain is the most secure and most flexible and can solve a lot of the issues that they have,” he says.
Something's happening to the Outlook apps for iOS and Android. Quietly, they’ve begun to take on more and more responsibility—not just email, but calendaring and file information as well. It's no accident. Smartphone apps began life as focused, single-purpose products, but Microsoft's betting we'll want more interconnectivity moving forward. Just as cosmic dust collected into stars and planets, then began rotating about one another, colliding and sometimes gobbling each other up, Microsoft’s mobile apps are being built to interact with each other, share information, work together.
Quote for the day: “Don’t just spot and point out problems – solve them.” -- S. Chris Edmonds
Bot's opinion concerns a rather convoluted case brought before the High Court of Ireland by Austrian citizen Maximillian Schrems. ... He had made the complaint in Ireland because Facebook's European headquarters is there, putting its interactions with citizens of any EU country under Irish data protection law. EU law requires that companies exporting EU citizens' personal data do so only to countries providing a similar level of legal protection for that data. In the case of the U.S., the exchange of personal data is covered by the Safe Harbor Privacy Principles, which the European Commission ruled in July 2000 provide adequate protection.
In general, IT certifications that are increasing salaries include ones related to architecture, security and cloud, including those that require deep systems knowledge, as well as certifications on skills specific to a platform or vendor, Foote said. Even if some of the most in-demand skills begin to see salary rates drop slightly, it may not be a sign that those skills are no longer hot -- it may simply be that the supply of workers is catching up with the demand, so the certification payoff isn't as strong. One job that will stay at the top of the hot-skills list: security. That's because members of companies' boards of directors are getting personally sued after security breaches, putting security concerns squarely in the C-suite, Foote said.
Google’s Android lock functionality should help relieve at least some of the concerns that IT administrators might have in allowing employees to use Android devices to access and store business applications and data. “One of the main tasks of administrators is to set authorisation levels for employees according to departmental and task requirements. They also have to ensure that security does not limit accessibility,” according to Foecki. “For example, two users on the same device will mean two completely different levels of authorisation for transferring data. This flexibility marries convenience with security.”
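The per-user authorisation idea in that quote — two users on the same device getting completely different permissions — can be sketched as a simple policy check. The roles and actions below are invented for illustration:

```python
# Toy per-user authorisation: the device is shared, but what each
# signed-in user may do is decided by their own role, not by the device.
POLICY = {
    "finance": {"read_reports", "transfer_data"},
    "intern":  {"read_reports"},
}

def can(user_role: str, action: str) -> bool:
    """Return whether a user with the given role may perform the action."""
    return action in POLICY.get(user_role, set())

# Same device, two users, two different outcomes:
print(can("finance", "transfer_data"))  # True
print(can("intern",  "transfer_data"))  # False
```

Keeping the policy keyed to the user rather than the device is what "marries convenience with security": the device can be freely shared while each user's access stays scoped.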
Recent survey findings show most IT security professionals believe they don’t have full visibility into where all their organization’s sensitive data truly resides. It’s important to note that cloud data has a three-phase lifecycle, and the journey carries many new risks. Today’s data privacy and compliance practitioners increasingly embrace the idea that safeguards must be in place during all three phases – in motion, at rest, and in use – regardless of where the data physically exists (e.g., within the company or in outsourced cloud systems). As many in the nation tune in to the U.S. Open, let's take a look at why so many enterprises are making such a racket (sorry!) about cloud – and the major concepts and considerations they must weigh when it comes to gaining visibility into and control over data during its daily journey to, from, and within public cloud environments.
Today, enterprises must grapple with a panoply of highly sophisticated threats. In response to this dangerous landscape, it is no wonder that businesses are increasingly turning to security dashboards – a powerful communication vehicle for all information security professionals. An effective security dashboard provides personnel, ranging from security analysts to CISOs, with the tools to report on incidents and evaluate security risks. Providers typically offer customers a number of customizable solutions, but this variety raises the question: what features make a security dashboard most effective? We asked industry experts for their tips on what they recommend a powerful dashboard must have.
When first trying to introduce systematic code review, organizations often get tripped up on when to insert the review. Should it be pre-push review (before the change has landed in the authoritative repository) or post-push (some time after the change has landed)? Since pre-push review happens before the change is deployed, authors are incentivized to craft small changes that can be readily understood (since the change will not be deployed until someone else understands it), and reviewers have a chance to make meaningful suggestions before the code runs in production. However, adding a new -- blocking -- step to the development process is a risky and potentially disruptive change. Post-push review still realizes many of the benefits of review in general and requires no initial changes to anyone's process.
XML data is processed by xmlLex.cpp, like all XML in the XMLFoundation; it then uses JNI to create instances of Java objects that arrive instantiated with all their member variables already assigned from the XML. No code needs to be written to accomplish this – just a little table of information that allows the algorithm to correlate XML elements to member variables. ... It's like magic from the Java side: the objects just appear in their containers. I make some outrageous claims in this article, and so that none of them be proved false, it needs to be known that JavaXMLFoundation.cpp uses a DOM-ish approach underneath Java, and therefore, although it’s still fast, it’s not going to have the big speed gain of the pure C++ implementation.
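The mapping-table idea can be sketched in Python as a toy analogy (this is not the XMLFoundation or JNI API; the `Customer` class and `XML_MAP` table are invented for illustration): a small table correlates XML element names to member variables, so no per-class parsing code is needed.

```python
import xml.etree.ElementTree as ET

class Customer:
    # The "little table": XML element name -> member variable name.
    XML_MAP = {"Name": "name", "City": "city"}

def from_xml(cls, xml_text):
    """Instantiate cls with members assigned from XML, driven only by XML_MAP."""
    obj = cls()
    root = ET.fromstring(xml_text)
    for element, attr in cls.XML_MAP.items():
        node = root.find(element)
        if node is not None:
            setattr(obj, attr, node.text)
    return obj

customer = from_xml(Customer, "<Customer><Name>Ada</Name><City>London</City></Customer>")
print(customer.name, customer.city)  # -> Ada London
```

The same `from_xml` helper serves any class that declares an `XML_MAP`, which is the "no code needs to be written" property the article describes.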
While the cloud has its benefits, it’s only as secure as you make it. As recently as last week, over 1.5 million medical records were breached on Amazon Web Services. The names, addresses, and phone numbers, along with biological health information including existing illnesses and current medications, were posted in the clear to Amazon S3 storage servers. These could just as easily have been credit card numbers. It’s also imperative to realize that just because a cloud vendor offers up a PCI-certified environment, it does not mean everything you build on top of it will automatically be in compliance with the PCI DSS or PA-DSS requirements.
Companies would perform due diligence on a target and look at all the standard risk factors: tax issues, environmental issues, employee arrangements, intellectual property issues, licenses and permits, debt, and other aspects of financial health among them. But as sensitivity to data and its value has risen in recent years, a company’s data can become a significant asset, often to the point of being the critical one justifying a deal. It can also be the reason a deal gets killed. No one wants to invest in a company only to have it hacked due to poor data security and then become the target of a regulatory investigation for unfair and deceptive trade practices due to poor privacy disclosures.
An organizational culture of resilience may be thought of as a climate or general atmosphere within a group, organization, or community which fosters resilience in the wake of adversity. It is an environment that is perceived by the majority of members/workers as supportive, motivating, and non-punitive. ... IOM notes that in developing resilient leaders, it is especially important to focus on frontline supervisors. Frontline supervisors may be the best medium for not only initiating changes within organizations but also sustaining those changes. Once created, resilient leadership practices serve as the catalyst that inspires others to exhibit resilience and to exceed their own expectations.
Quote for the day: "Leadership is intangible, and therefore no weapon ever designed can replace it." -- Omar N. Bradley
By contrast, the performance and capacity of NFV-defined functions are transformed by the multi-tenant hosting infrastructure (think network server) and the current load from all its applications. The workload on that infrastructure is constantly changing due to the context of the network at any point in time. Yes, NFV can enable flexibility and agility, but it’s hard to monitor exactly what’s going on and thereby proactively manage it. That lack of determinism frustrates traditional means of administering the network, because the need for real-time operations support systems (OSSs) is moving deeper into the network itself.
After all, none of the hardware equipment has any intrinsic value for a company. Instead, the intelligence resides in the data itself and its potential for any business. Residing ‘in the cloud’, virtualised data can be rendered both user accessible and secure. The balancing act is substantially simplified through emerging technology, which reduces the number of physical data copies, thus representing a smaller attack surface, a tighter span of protection and greater control. Reducing the number of physical data copies also eliminates the necessity for continuously adding storage capacity. Copy data virtualisation delivers new frontiers of flexibility and speed to meet business objectives at higher performance levels and lower costs.
“What made it [Spark] game changing is it had cross-platform capability,” Glickman said. “It combined relational, functional, iterative APIs without going through all the boilerplate or all the conversions back and forth to SQL. It was storage agnostic, which I think was the key insight Hadoop had been missing, because people were thinking about how to put compute on HDFS.” Glickman also saw other advantages of Spark, including that it provides compute elasticity as well as the ability to scale storage and the number of application users. “The power of Spark is in the API abstractions,” said Glickman. “Spark is becoming the lingua franca of big data analytics. We should all embrace this.”
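The "boilerplate-free, storage-agnostic" style Glickman describes can be suggested with a toy pure-Python sketch (this is not the Spark API; the `Dataset` class and sample rows are invented for illustration):

```python
class Dataset:
    """A toy, storage-agnostic dataset with a Spark-like chained API."""
    def __init__(self, rows):
        self.rows = list(rows)          # rows could come from any storage layer
    def filter(self, predicate):        # functional-style filtering
        return Dataset(r for r in self.rows if predicate(r))
    def select(self, *cols):            # relational-style projection
        return Dataset({c: r[c] for c in cols} for r in self.rows)
    def count(self):
        return len(self.rows)

sales = Dataset([
    {"region": "EU", "amount": 120},
    {"region": "US", "amount": 80},
    {"region": "EU", "amount": 45},
])
# Relational and functional operations chain without SQL round-trips.
eu = sales.filter(lambda r: r["region"] == "EU").select("amount")
print(eu.count())  # -> 2
```

The point of the sketch is the shape of the API: transformations compose directly on the dataset object, independent of where the rows are stored.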
JSR 376 is, of course, the Java Specification Request that aims to define "an approachable yet scalable module system for the Java Platform." But Project Jigsaw actually comprises JSR 376 and four JEPs (JDK Enhancement Proposals), which are sort of like JSRs that allow Oracle to develop small, targeted features for the Java language and virtual machine outside the Java Community Process (JCP). (The JCP requires full JSRs.) JEP 200: The Modular JDK defines a modular structure for the JDK. Reinhold has described it as an "umbrella for all the rest of them." JEP 201: Modular Source Code reorganizes the JDK source code into modules. JEP 220: Modular Run-Time Images restructures the JDK and JRE run-time images to accommodate modules.
An insurance company wanted to investigate the relationship between good or bad habits and the propensity for buying life insurance. When the company realized "habits" was too general, it focused solely on smokers versus non-smokers, but even that didn't work. "In half a year, they closed this project, because they didn't find anything," Sicular said. The failure, in this case, was due to the complexity of the problem. There's a big gray area the insurance company didn't account for: People who smoked and quit, a nuance likely overlooked because, to put it simply, "they're not healthcare professionals," Sicular said.
After decades of research and development, artificial intelligence (AI) is finally becoming a part of daily life. While we may not be fully in the age of AI just yet, there's no denying that it's just around the corner. The evidence is clear in the consumer market with personal assistant apps like Siri and Google Now using AI to provide contextual, relevant information, and anticipate our needs. But, the enterprise is rife with AI as well, in the form of cognitive computing, machine learning, and more. Here are ten enterprise technologies that are setting the stage for the AI era to come.
There are plenty of tools for looking at the data, but each tool and the data it exposes is isolated from all of the other tools. And without the ability to correlate data across tiers, it is hard to understand why something is happening. Splunk itself is good at pulling in arbitrary data sources, allowing analysts to correlate data such as real-time sales data with web server traffic and database health. But that alone isn’t enough. The ad hoc queries written by the analysts are, by their nature, non-repeatable. And with application developers being pulled in multiple directions, it can be hard to find one to build and maintain custom dashboards. This is where Splunk’s new product, IT Service Intelligence (ITSI), comes in: Splunk ITSI is designed to allow analysts to create their own dashboards.
Traditionally, HR and recruiting have been pressured to solve the problem of the talent war and the skills gap through ever-increasing compensation packages, poaching talent, offshoring and outsourcing, but clearly that hasn't worked to solve the whole problem. This "crowdsourcing" approach gives CIOs a new way to address these talent issues, says Harry West, vice president of services product management for Appirio. "The gig economy doesn't have to be threatening at all. There's a huge opportunity for businesses here, and a large pool of flexible, highly skilled workers almost on-demand. CIOs have the opportunity to tap into a scalable workforce that can help them meet IT needs and reduce costs," says West.
I’ve actually had an opportunity to talk to some individuals at some large enterprises where they are worried about things like their ERP system, but also how people operate and interoperate with it. I think that there are probably interesting problems in all of the realms; you don’t have to be a website, public facing, Open API, whatever, in order to have interesting problems to solve. I do think that running an Exchange server yourself in-house these days is possibly better not done. I would say that you’re probably going to add more value to your business by just going and getting one of the numerous cloud solutions available for that. Having people actually help, I don’t know, make the bring-your-own-device solutions work better for your enterprise, or something.
There are a number of out-of-box automation solutions that give you access to predefined production workflows that can help cut out the need for custom development. This helps support the concept that the operations team should be the team that drives the automation. However, the term “predefined” often has a secondary definition of “limitation.” You still have the ability to create the “custom” code needed to remove any limitations you encounter with the predefined workflows. The developer role will still be needed for any kind of DevOps model, but I have a strong belief that you can teach the Dev to Ops, but cannot really teach the Ops to Dev.
Quote for the day: "You think you can win on talent alone? Gentlemen, you don't have enough talent to win on talent alone." -- Herb Brooks
The "functional programming" component of Prajna has to do with F#, the .Net functional programming language. ... "Prajna offers real-time in-memory data analytical capability similar to Spark (but on .Net platform), but offers additional capability to allow programmer to easily build and deploy cloud services, and consume the services in mobile apps, and build distributed application with state (e.g., a distributed in-memory key-value store)," that job posting adds. ... the Microsoft team claims that Prajna is pushing the distributed functional programming model further than Spark does by "enabling multi-cluster distributed programming, running both managed code and unmanaged code, in-memory data sharing across jobs, push data flow, etc."
A physical thing becomes “smart” when it connects to the digital world. Layers 2, 3 and 4 allow us to invent and propose to individuals (customers but also citizens) new services (the digital services of layer 5). One important fact is that layers 1 through 5 cannot be created independently of each other. That is why the arrows connecting them are bi‐directional in fig 1. An IoT solution with value is usually not the simple addition of layers but rather an integration extending into the physical level. How the hardware is built, for instance, is increasingly influenced by the subsequent digital levels; on the other hand, the software composing the digital levels must be designed to fit the physical levels.
Relationships are as important as the data itself in today's connected world. Use cases that require modeling complex relationships are the best fit for graph databases. As such, Real Time Recommendation, Fraud Detection, Master Data Management, Social Networks, Network Management, Geolocalized Apps and Routing, Blockchain, Internet of Things, Identity Management, and many others come to mind. ... OrientDB is a distributed graph database where every vertex and edge is a JSON document. In other words, OrientDB is a native multi-model database that marries the connectedness of graphs, the agility of documents, and the familiar SQL dialect.
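The kind of relationship traversal that graph databases make cheap – friends-of-friends without join tables – can be shown with a minimal pure-Python sketch (this is not OrientDB's SQL dialect; the `friends` graph is invented for illustration):

```python
from collections import deque

# Adjacency list: each vertex maps to the vertices it has a "FriendOf" edge to.
friends = {
    "alice": ["bob", "carol"],
    "bob":   ["dave"],
    "carol": ["dave", "erin"],
    "dave":  [],
    "erin":  [],
}

def within_hops(graph, start, max_hops):
    """Breadth-first traversal: all vertices reachable within max_hops edges."""
    seen, queue = {start}, deque([(start, 0)])
    reached = set()
    while queue:
        vertex, depth = queue.popleft()
        if depth == max_hops:
            continue
        for neighbor in graph[vertex]:
            if neighbor not in seen:
                seen.add(neighbor)
                reached.add(neighbor)
                queue.append((neighbor, depth + 1))
    return reached

print(sorted(within_hops(friends, "alice", 2)))  # -> ['bob', 'carol', 'dave', 'erin']
```

In a relational store, each extra hop typically costs another self-join; in a graph model, it is just one more step along stored edges, which is why the use cases listed above fit graphs so well.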
The funny thing is, vendors actually began drinking their own marketing Kool-Aid and now think of their MDM, quality, security, and lifecycle management products as data governance tools/solutions. Storage and virtualization vendors are even starting to grok this, claiming they govern data. Big data vendors jumped over data management altogether and just call their catalogs, security, and lineage capabilities data governance. ... First, you (vendor or data professional) cannot simply sweep the history of legacy data investments that were limited in results and painful to implement under the Mad Men carpet. Own it and address the challenges through technology innovation rather than words.
Development teams want to launch features fast and frequently, while IT Ops wants to maintain infrastructure stability and availability – which means as few changes as possible. Customers want both. ... Developers are often isolated from the rest of IT in larger organizations. Even though they’re part of the same department, a lack of collaboration can hurt how teams work, for no better reason than that people sit in different parts of the building or don’t talk at lunch. However, they often need to work together. Not only should developers assign resources for escalation, they should also support the SLA with the customer. The SLA makes them accountable for impact to business productivity, aligning them with IT.
The Toyota Kata refers to this as establishing “strategic direction.” To Stephen Bungay, though, it is strategic intent, and Hoshin Kanri calls it strategy deployment. Once you have a strategic direction, you can find out where the gaps are, then establish immediate steps, called a target condition, and move toward that goal. ... One way to conduct a gap analysis is to look at the entire process: Build, Test, Fix, Deploy-Coordinate, Deploy/Do -- looking at how long each step takes if done ideally, and the various ways that step breaks down. Eventually, you'll find a bottleneck: the step that’s holding back improvement the most. Sometimes, the what-to-fix isn't the bottleneck, but the step that's easiest to improve right now.
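The gap analysis described above – comparing each step's actual time with its ideal time and finding the bottleneck – can be sketched as follows (the per-step durations are hypothetical numbers invented for illustration):

```python
# Hypothetical per-step durations in hours: (actual, ideal-if-done-well).
steps = {
    "Build":             (2.0, 0.5),
    "Test":              (8.0, 1.0),
    "Fix":               (4.0, 2.0),
    "Deploy-Coordinate": (24.0, 1.0),
    "Deploy/Do":         (1.0, 0.5),
}

# The gap is actual minus ideal; the bottleneck is the step with the biggest gap.
gaps = {name: actual - ideal for name, (actual, ideal) in steps.items()}
bottleneck = max(gaps, key=gaps.get)
print(bottleneck, gaps[bottleneck])  # -> Deploy-Coordinate 23.0
```

As the excerpt notes, the step you actually fix first may not be this bottleneck; it may simply be the step that is cheapest to improve right now.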
Nano Server is a Windows OS created for the cloud age. It was announced by Microsoft this April and will ship with Windows Server 2016. What makes Nano Server special? A very small disk footprint compared to traditional Windows Server deployments (a few hundred MB instead of multiple GB); a very limited attack surface; a very limited number of components, which means fewer updates and fewer reboots; and much faster virtual and bare-metal deployment times due to the reduced footprint. ... In short, the OS has been stripped of everything that is not needed in a cloud environment, in particular the GUI stack, the x86 subsystem (WOW64), MSI installer support and unnecessary APIs.
In Scrumban, teams can still employ the same estimation techniques, but they can enhance their understanding of the work in the context of their historical performance. Delivery time -- the amount of time it takes for work to be completed once it has begun -- can be graphically plotted to reflect the team’s distribution pattern. These kinds of additional views into the team’s work provide many advantages. From a team standpoint, members begin to better understand the degree of variability in their historical deliveries. They can explore whether or not some of that variability can be correlated to other factors, and manage their estimation and Sprint planning process from a position of superior understanding of their historical performance.
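The delivery-time analysis described can be sketched with Python's `statistics` module (the sample delivery times are hypothetical; real teams would pull them from their work-tracking tool):

```python
from statistics import mean, quantiles

# Hypothetical delivery times in days for recently completed work items.
delivery_times = [2, 3, 3, 4, 5, 5, 6, 8, 13, 21]

# Plotting these as a distribution shows variability; a high percentile is a
# common Scrumban forecasting answer: "85% of items finish within N days."
p = quantiles(delivery_times, n=100)   # percentile cut points 1..99
print(mean(delivery_times))            # average delivery time
print(p[84])                           # 85th-percentile delivery time
```

The gap between the average and the 85th percentile is a direct, data-backed measure of the variability the team can then investigate and manage.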
Success in breaking down barriers doesn't come from just talking the talk. Conophy is strategic when pairing up a member of his team with a business colleague, selecting someone who understands business basics such as how the company makes money and is patient enough to sit in a room with the business and field question after question. "It's a different dynamic," Conophy said. "And that also means your people have to be articulate, understand the technology ecosystem and, at the same time, understand the business to be effective in that room." He's also introduced a two-speed IT model by unshackling "those at the sharp end of the sharp end" -- potential digital disruptors -- from traditional IT functions so that they can experiment, innovate and "go after the likes of a different kind of competitor," he said.
"Think about when Luke Skywalker loses his hand," said Melroy. "He gets a new one and it can feel. It's no different. He can continue to function in all the ways he was used to. The ability to control that new hand with your brain and have seamless sensing in real life? Absolutely, that is coming. That is five to 10 years away." To make that work, Melroy said, we'll need to be able to communicate with our smart devices without typing on a keyboard or using a mouse. Even spoken commands would be too awkward. We'll need to communicate with our assistants or devices with our thoughts. According to several researchers, such an advance is not far away.
Quote for the day: "Every time you share your vision, you strengthen your own subconscious belief that you can achieve it." -- Jack Canfield
“The most important thing is to have a clear view of your strategic objectives,” says Sengupta. “Are you doing things for efficiency or are you doing things for growth?” Having that clarity makes it easier to identify appropriate metrics for digital efforts. In addition, digital transformation project teams must be cross-functional and accountable to a set of common outcomes. “Most organizations get the first part right, but in the end it is human nature to steer towards their own individual incentive structures,” Sengupta says. “Driving alignment is important at every level—from the strategic down to the individual.” IT must also collaborate with the business to create an end-to-end vision for any digital improvement, including the business process and organizational changes required for them to deliver business value.
The first step for container orchestration is choosing the right tool ... The second best practice for container orchestration is to spend time on your application architecture. Many organizations rush through container-based application development, especially since orchestration tools remove some of the underlying complexity. But it pays to think carefully about how to divide up the application within the containers that the orchestration tool will manage. ... Finally, test and properly operationalize the container orchestration. At the end of the day, you have to provide users with something that functions correctly and provides nearly 100% uptime. Perform component and regression tests, performance tests and penetration tests for security reasons.
Apache Drill enables data analysts to explore the data without having to ask IT counterparts to define schemas or create new ETL processes. As analysts delve into the data, Apache Drill’s engine discovers the source schemas and automatically adjusts query plans. Querying self-describing data and processing complex data types as you go provides an entirely new way of wringing every possible useful bit and byte of business intelligence from big data. Data sources such as Hadoop, HBase, and MongoDB can be queried using ANSI SQL semantics to glean new insights at the speed of thought. Actionable insight comes from seeing the correlations across multiple, apparently unrelated data sources, including blog posts, sensors, clickstreams, customer interaction records, videos, transaction data, competitive analysis, and much more.
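The "schema on read" idea behind Drill can be suggested with a toy Python sketch (this is not Drill itself; the JSON sample records are invented for illustration): instead of declaring a schema up front, the fields and types are discovered from the self-describing data as it is read.

```python
import json

# Self-describing records: no schema was declared ahead of time, and the
# second record even carries a field the first one lacks.
raw = """
{"user": "ann", "clicks": 3}
{"user": "bob", "clicks": 5, "country": "DE"}
"""

records = [json.loads(line) for line in raw.strip().splitlines()]

# "Schema on read": discover fields and types from the data itself.
schema = {}
for record in records:
    for field, value in record.items():
        schema.setdefault(field, type(value).__name__)

print(schema)  # -> {'user': 'str', 'clicks': 'int', 'country': 'str'}
```

Drill performs this discovery inside its query engine and adjusts query plans accordingly; the sketch only illustrates why no upfront ETL or schema definition is required.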
The center of data gravity is moving with more apps being delivered via cloud Software as a Service (SaaS). In the past, I might only have to extract Salesforce data with other on-premises app data into a client’s on-premises data warehouse. Today there is a constantly growing list of popular cloud app data sources that analytics pros need to include in decision-making processes. If you neglect the ocean of cloud and IoT data sources that your opponents do include in their analytics, you will lose your competitive edge and may miss a key window of opportunity in the hyper-competitive global economy. Don’t believe me? Here is a competitive reality check. By 2014, 89% of the firms on the 1955 Fortune 500 list had vanished. Steven Denning pointed out in Forbes that “fifty years ago, the life expectancy of a firm in the Fortune 500 was around 75 years. Today, it’s less than 15 years and declining all the time.”
"It's very important when you have an established organization to give room for innovation, and you usually can't do that within the boundaries of an established organization," he says. "So whatever you call it, you have to have it within another unit. You need teams focused on a new innovative piece." He says the move is paying off for the company, which has embarked on a digital transformation that has used technologies, such as the Internet of Things and mobile platforms, to make its equipment smarter, its workforce more efficient, and the company better connected and more responsive to customers. "It's really allowing us to have a faster, more risk-taking approach. You have the start-up mentality," he adds.
Enterprises looking to gain a competitive advantage with advanced analytics may want to take a look at their own corporate cultures before they invest big bucks in new tools and build out a workforce of data scientists. Lack of investment isn't one of the top reasons predictive analytics and prescriptive analytics programs fail. Rather, they fail because they lack buy-in from users and other stakeholders. That's according to Lisa Kart, research analyst at Gartner, who will present her best practices for advanced analytics projects during the Gartner Business Intelligence & Analytics Summit 2015 in Munich next month. The summit marks the first event in a series that will travel the world over the next 12 months, landing in the Dallas area in March 2016.
We need to stop being so application agnostic close to the app and begin shifting that all left, toward dev and ops and a software model that scales both economically and architecturally. We need a generic, corporate security infrastructure at the traditional edge of the network and a specific, per-application security architecture at the new perimeter: the application. ... Consider the coming tsunami of applications generated by the Internet of things and adoption of microservices architectures. If every “new” technology generally results in a 10x increase in applications, then how many applications will two, simultaneous “new” technologies generate? How many new security policies will be required at the edge of the network to support each and every one of those applications?
"IT is not just an enabler of certain processes but part of the delivery of every product and service we offer," Watkins said. Indeed, the company itself was undergoing a transformation, Watkins said. KAR no longer wanted to be a car auction company that uses technology but "a technology company that sells cars," he said. IT had not kept up with the vision. "With the convergence of these technologies, business demand skyrocketed and created a wide gap between business expectations and IT delivery. Something had to switch," Watkins said. ... "We need our staff to be agents of change. The status quo doesn't get it done. We have to look at things differently. We have to be problem solvers. We have to bridge siloes between IT and operations, between one IT team and another IT team, and between being a technology provider and being a service organization," he said.
IT outsourcing customers should take this opportunity to review the amount of control they retain over employees of their IT service providers. “Since this is the first case, it is difficult to determine how much control is too much—and will qualify an outsourcer as a joint employer,” said Van Noose, “and these companies should consult an attorney to assist in further evaluating whether they retain a degree of control that would amount to a ‘joint employer’ under this decision.” “This decision involves potentially requiring an outsourcing customer to get involved in responding to employees of the outsourcer in ways that are different from the ways that typical outsourcing contracts indemnify against,” says Edward J. Hansen, partner in the business technology and complex sourcing practice at McCarter & English.
Quote for the day: "Leaders are more powerful role models when they learn than when they teach." -- Rosabeth Moss Kantor
PowerShell is pretty close to a full-blown programming language in expressive power and the breadth and depth of its lexicon and syntax. Task-oriented code items in PowerShell are called "cmdlets", and there is an amazing variety of pre-fabricated cmdlets available from Microsoft (and other parties) for all kinds of administrative tasks – everything from Windows configuration and installation to file and print management, policy management, virtual machine management, and much more. PowerShell has been around long enough that it's now in its 5th major version, as the output of the PowerShell variable $PSVersionTable.PSVersion illustrates.
Faster, easier purchases without having to take your wallet out of your pocket, connect your card to the payment terminal, type in your PIN, make the payment, then put the card back in your wallet and your wallet back in your pocket. Just tap and pay – a one-second transaction. Another major benefit comes from “loyalty programs” pre-implemented right into Android Pay, and this feature alone is going to revolutionise shopping. Finally, free, instant, person-to-person payments. ... Android Pay can be used with all NFC-enabled Android devices, on any mobile carrier, at every “tap and pay ready” location across the US, to start with. At this point, Android Pay supports credit and debit cards from Visa, MasterCard, American Express and Discover, with worldwide banks enrolling day by day.
Most businesses do not understand the nuances of predictive accuracy, so it will be essential for a Data Scientist to help the organization move beyond the simple notion of accuracy. Obviously, we all want to hit the proverbial target, at least directionally. As a Data Scientist, you will want to steer the conversation to something more useful, like whether an algorithm produces “high accuracy/low precision” or “high accuracy/high precision”. It usually proves beneficial to the business audience to distinguish what is meant by accuracy and precision, as the two appear close in meaning. Help them see that “accuracy” refers to the closeness of a predicted value to the actual value, while “precision” refers to how tightly repeated predictions cluster together.
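The distinction can be made concrete with a small sketch (the two prediction sets are hypothetical, invented for illustration): both models are accurate on average, but only model A is also precise.

```python
from statistics import mean, pstdev

actual = 100.0  # the true value being predicted

# Two hypothetical models' repeated predictions for the same quantity.
model_a = [99.0, 101.0, 98.5, 101.5]   # close to actual AND tightly clustered
model_b = [80.0, 120.0, 85.0, 115.0]   # centered on actual, widely scattered

for name, preds in [("A", model_a), ("B", model_b)]:
    accuracy_error = abs(mean(preds) - actual)  # closeness to the actual value
    precision = pstdev(preds)                   # spread of the predictions
    print(name, round(accuracy_error, 2), round(precision, 2))
```

Both models have zero average error (high accuracy), yet model B's individual predictions scatter far more widely (low precision) – exactly the "high accuracy/low precision" case a business audience tends to miss.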
In this white paper, IDC describes lessons learned from interviews and surveys of organizations engaged in Big Data initiatives and the patterns of adoption they have followed to expand existing or initiate new Big Data projects to create value for their organizations. The document highlights the importance of the Big Data architecture to drive improvements and innovation in customer interactions, operational efficiency, and compliance and risk management, among a wide range of business goals and desired outcomes. This white paper utilizes previously published IDC research frameworks such as the IDC Big Data and Analytics Opportunity Matrix and the IDC Big Data and Analytics MaturityScape. Finally, this white paper highlights Oracle Corp.'s Big Data architecture, technology, and services, as well as Oracle customer examples utilizing these offerings.
In recent years there have been new waves of malware designed to encrypt the user’s information, enabling cybercriminals to demand a ransom payment that will allow the user to decrypt the files; these are detected by ESET security solutions as filecoders. In 2013, we learned about the importance of CryptoLocker due to the number of infections that occurred in various countries. Its main characteristics include encryption through 2048-bit RSA public key algorithms, the fact that it targets only certain types of file extensions, and the use of C&C communications through the anonymous Tor network. Almost simultaneously, CryptoWall made its appearance and succeeded in outdoing its predecessor in terms of the number of infections, partly due to the attack vectors employed.
Machine learning lets a computer continually adapt itself to your inputs so it can keep improving its results. Another excellent example of this is found in Apple’s new iPhone operating system. Engineered with what Apple bills as more “proactive” intelligence, iOS 9 pushes apps that you often use in certain situations to your lock screen for easy access. So if you tend to listen to podcasts on your commute to work, it might suggest you open Stitcher every morning around the time you leave home. ... “We’re at the early stages of applying machine learning to productivity,” says Tim Porter, founder of Gluru, a startup building a smart personal assistant for people’s daily workflow.
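The kind of contextual suggestion described – apps surfaced based on when you habitually use them – can be sketched with a toy frequency model (the launch log, hours, and app names are invented for illustration; real systems use far richer signals):

```python
from collections import Counter

# Hypothetical app-launch log: (hour_of_day, app) observations.
launches = [(8, "Stitcher"), (8, "Stitcher"), (8, "Mail"),
            (13, "Slack"), (8, "Stitcher"), (20, "Netflix")]

def suggest(hour, history):
    """Suggest the app most often launched at this hour so far."""
    counts = Counter(app for h, app in history if h == hour)
    return counts.most_common(1)[0][0] if counts else None

print(suggest(8, launches))   # -> Stitcher
```

Because the suggestion is recomputed from the growing history, every new observation refines the next prediction – the "continually adapt to your inputs" loop in miniature.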
In AOP, modularity means that methods do not call a crosscutting class directly; instead, crosscutting concerns are expressed in such a way that they apply themselves wherever they are required. I am going to explain how it works. I am not going to deal with setting up the environment or with detailed use of AOP. The main objective of this tip is to introduce AOP to Spring developers who have not used it before. AOP seems complicated, but it is quite easy to use and provides a very powerful feature. This tip shows how, with the use of some simple keywords, we can achieve AOP. I am not deep-diving into setting up the environment. Once a Spring developer understands the simplicity of AOP, setting up an environment will not be a tough task.
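Spring AOP itself is configured in Java with aspects and pointcuts, but the core idea – write a crosscutting concern once and apply it declaratively, without the business method calling it – can be sketched with a Python decorator as an analogy (the `logged` aspect and `transfer` method are invented for illustration):

```python
import functools

def logged(func):
    """A crosscutting concern (logging) written once and applied declaratively,
    analogous to what Spring AOP expresses with aspects and pointcuts."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"before {func.__name__}")     # "before" advice
        result = func(*args, **kwargs)
        print(f"after {func.__name__}")      # "after" advice
        return result
    return wrapper

@logged
def transfer(amount):
    # Pure business logic: it never mentions logging.
    return f"transferred {amount}"

print(transfer(100))
```

The business method stays free of logging code, and the same `@logged` line can be attached to any number of methods, which is the modularity the tip describes.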
Agile methodologies long ago proved their efficiency with small co-located teams, hitting home with the flexibility and velocity that come naturally to such teams. But when it comes to moving past team level to organizational scale, Agile practices are up against enterprise development realities like distributed teams, multi-component projects and traditional resource management. As a matter of fact, to adopt Agile practices, specifically Scrum, no organization is too big, complex or distributed. Scrum practices scale perfectly well to fit complex enterprises of more than 100 people, provided due attention is paid to organizing the transition process. Here are four rules to follow when implementing Agile at the multi-team enterprise level.
When data is inside your four walls, so to speak, you put trust in your own employees, the infrastructure and security solutions that you select, and the policies that you create to secure it. But as information moves to the cloud, data physically resides in infrastructures owned and managed by another entity – and that trust goes into someone else’s hands, infrastructure and security policy decisions. That is, unless you and your SaaS provider take a new approach. Recent mega-breaches (think: Anthem, Sony) have proven that hackers are after one thing: data. By using encryption, SaaS providers can render sensitive data unusable to hackers. However, encryption alone is not enough. Access controls and key management can also prove to be weak points in a SaaS provider’s defenses.
Call it the “Principle of Great Expectations”: the greater the hype or estimated market size, the higher the likelihood of a rapid proliferation of products that are “tragically pathetic.” Products are often developed simply because certain technologies have become available, without a lot of thought given to why users need them or how to make them delightful to use. Take, for example, one of today’s most successful product categories: the tablet. The first product in this category, the GRiDPad, was introduced in September 1989. It was followed by other unsuccessful attempts to crack the tablet market, including the Apple Newton, in 1993, and the enterprise-oriented Microsoft Tablet PC, in 2002. It wasn’t until 2010, when Apple introduced the iPad, that the tablet became a successful mainstream product, appealing to both consumers and business users.
Quote for the day: "If everyone has to think outside the box, maybe it is the box that needs fixing." -- Malcolm Gladwell
The idea is that in the normal retrospective, there are lots of exercises that we do, and one of my favorites is called the Timeline. I am sure you have seen it. You use cards or stickies of different colors and you begin. If it is an iteration retrospective, you start with the beginning of the iteration and you put the date, then the end date, and then across the timeline you put the stickies or the cards that reflect events, and they are of different colors. Then you reflect back and you use that to drive actions that you are going to take at the end of the retrospective, because, unfortunately, most of us, not just old people like me, cannot remember what happened. So it is an exercise to help you remember what happened.
Obviously, the mass market understands little of how wearable technology works, what’s inside those little gadgets and, frankly, why should they care? But for us, the ones with a vision, the ones who see a layer or two deeper inside these devices, for those of us looking for new markets, new business ideas, the question remains: Is there anything beyond a Fitbit bracelet or the latest Apple Watch? We listened to Laurenti de’ Medici, our CEO, speaking at Digital Catapult a couple of weeks ago about the constant changes in the wearable tech landscape, about “wearables” as we know them nowadays – fitness trackers, NFC rings, smartwatches – slowly morphing into embeddables, ingestibles, implantables and smart sensors; logical changes leading to disruption, leading to new trends, new markets, new jobs, new industries such as fashion tech and digital health.
M can do those things because the software hands off things it can’t do to human operators known as “trainers.” Sometimes a trainer has to do all the work, but M is also capable of digesting queries it recognizes but can’t handle into easy-to-process summaries that make a trainer’s work more efficient. Right now this model is not efficient enough for M to be more than just an experiment, because it requires too many human workers. But Alex Lebrun, who leads the team working on Facebook’s assistant, says that it can become a real product because the work of the human trainers is gradually teaching the software how to do a greater share of the work. Lebrun and his team joined Facebook when the social network acquired the startup he cofounded.
"The market for the kind of IT skills you need to build payment systems seems to be pretty hot right now because you've got four big institutions and then some smaller ones like us dipping into that pool," he told the House of Representatives economics committee on Friday. A Greythorn study from last year predicted that Australia would head into a "huge" skills shortage within the next five years. The survey said Australia was at risk of losing its IT professionals to the overseas market. However, the most recent Skills Shortages Australia report by the Australian Department of Employment said there was no skills shortage in the ICT sector in Australia. "Demand for ICT professionals is subdued and employers have little difficulty recruiting workers who meet their skill level expectations,"
If the focus of the problem to be solved is internal and the state is existing, then the defined monetization opportunity is business optimization. When using data for business optimization, the value generation and recognition is not defined by revenue dollars or asset assessments for accounting ledgers. The monetized value of data in business optimization is defined by reducing costs or improving productivity in business operations. While the value of business optimization can certainly be defined in monetary terms, the value can also be recognized in soft terms such as increased employee satisfaction, reduced time and effort, or increased accuracy and quality, all of which have significant value for the overall business.
Although most of us will never be tasked with goals of such scope, many of us have to manage projects in one way or another. The Project Management Institute estimates there will be more than 15 million new project manager positions added to the global job market by 2020—and many of the rest of us will still have smaller projects to manage on our own. Project Management, simplified, is the organization and strategic execution of everything that needs to get done to tackle a finite goal—on time and within budget. Whether developing new software, carrying out a marketing campaign, or landing a human on Mars, project management is what gets you to your goal.
“We’re trying to create a bit of diversity away from the big whoops and high fives,” Loftis says. She adds that while Salesforce tends to be lauded by customers, the notion that deploying services is simple is wrong: companies need to put in a fair amount of effort to get the best return on investment possible. “Users don’t necessarily feel they’re getting a lot of value for the data they’re being asked to put into the system. On ROI, it’s not that people feel they’re not getting enough but people say ‘this takes more investment than we thought it would’. It’s not just licensing and implementation; adoption and the change management perspective need to be factored in. There’s a misconception that Salesforce ‘just works’.”
Many machine learning solutions have already been developed, and they are continually being improved. I spent some time at Microsoft Research doing some early work in Bayesian reasoning and machine learning. We built a solution for traffic modeling that was spun out as Microsoft Research’s first startup company, called INRIX, which now provides real-time and predicted traffic information around the world. I see three tiers of commercial engagement with these types of technologies. For one group of companies, such as Google, Amazon, Facebook, Microsoft, and Apple, these technologies are strategic, and their investment is a hundred times or more what it would be for a more conventional business.
These attacks may go undetected and this “noisy traffic” can significantly slow legitimate traffic or cause network outages. With legacy systems, mitigation requires labor-intensive manual intervention because there’s no automated method to handle the threat. If and when network security solutions do sense a NetFlow-based volumetric attack with an application component, manual mitigation can take 15 to 20 minutes. By the time the security team has developed a strategy, the attackers have likely morphed to new signatures.
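The automated alternative to that 15-to-20-minute manual cycle is usually some form of per-source rate check on flow records. The sketch below is hypothetical (not any vendor's detector, and `FlowRateDetector` is an invented name): it flags a source once its packet count over a sliding window crosses a threshold, the kind of first-line signal a NetFlow-based system can act on before analysts are even paged.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: flag a source when its packet rate over a sliding
// time window exceeds a threshold, as an automated volumetric-attack
// detector might do before handing the case to a human.
public class FlowRateDetector {
    private final long windowMillis;
    private final int maxPacketsInWindow;
    private final Deque<Long> timestamps = new ArrayDeque<>();

    public FlowRateDetector(long windowMillis, int maxPacketsInWindow) {
        this.windowMillis = windowMillis;
        this.maxPacketsInWindow = maxPacketsInWindow;
    }

    /** Record one packet at time t (millis); true means this source now looks volumetric. */
    public boolean record(long t) {
        timestamps.addLast(t);
        // Evict packets that have fallen out of the sliding window.
        while (!timestamps.isEmpty() && timestamps.peekFirst() <= t - windowMillis) {
            timestamps.pollFirst();
        }
        return timestamps.size() > maxPacketsInWindow;
    }

    public static void main(String[] args) {
        FlowRateDetector d = new FlowRateDetector(1000, 100); // >100 pkts/sec => alert
        boolean alerted = false;
        for (int i = 0; i < 150; i++) {
            alerted = d.record(i * 5L); // 150 packets inside 750 ms
        }
        System.out.println(alerted ? "mitigate automatically" : "normal");
    }
}
```

Real systems layer signature and behavioral checks on top of this, precisely because attackers morph traffic patterns faster than a fixed threshold can track.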
The datasets themselves also tend to be born in the cloud. As I said, the types of applications that we're building typically focus on sales and marketing and social, and e-commerce related data, all of which are very, very popular, cloud-based data sources. And you can imagine they're growing like crazy. We see a leaning in our customer base toward integrating some on-premises information, typically from their legacy systems, and then marrying that up with the Salesforce, or the market data or social information that they want to integrate and build a full view of their customers -- or a full exposure of what their own applications are doing.
Quote for the day: "There are only two kinds of [programming] languages: the ones people complain about and the ones nobody uses." -- Bjarne Stroustrup