July 25, 2016

More Than Half The World Is Still Offline

While more than four out of five people in developed countries use the internet, just over 40 percent of those in developing countries have access. In the ITU’s “least developed countries” -- places like Haiti, Yemen, Myanmar and Ethiopia -- just 15.2 percent of the people are online. ... Also, fewer women than men are on the internet, and that difference is getting worse. The worldwide difference between internet user penetration for males and females is 12.2 percent, up from 11.0 percent in 2013, the ITU says. It’s shrunk significantly in developed countries, from 5.8 percent to just 2.8 percent, but grown in poorer places. Cost makes it harder to get online in some countries. The ITU says entry-level internet access has become affordable in many developing countries since 2011 but remains unaffordable in most of the poorest countries.


Short-term programs, not four-year degrees, are the future of tech education

It takes more than just technical skills to succeed in a coding career. A big part of a career in the programming field is troubleshooting and responding to problems that arise day-to-day. In order to do this successfully, it is vital to be an inquisitive, intelligent learner who likes working through challenges. Additionally, while some may think of programming as solo work, it is quite often done in a team environment. The importance of clear communication and teamwork in these roles cannot be overstated ... A three-month program like those offered at our schools offers a different type of learning environment. We are able to focus on the key coursework that will help students get in-demand jobs, and our student outcomes back this up.


Ransomware Predictions | Past, Present, Future

A criminal may not need to target an entire enterprise’s set of hosts for maximum return potential. Targeting a few critical assets and preventing restoration ahead of time may be all that is needed to extract a higher ransom amount from some organizations. Think of print servers sitting in a massive warehouse distribution operation. Many of these print servers are still running Windows XP – oftentimes because they are so critical to the operation that they literally cannot be replaced or upgraded. How much money would such an operation pay to get those servers back online? Answer: $1 less than the hundreds of thousands of dollars per day in operations they support. And if it’s a perishable food distribution operation, even more.


EY Report: Blockchain Technology to Reach Critical Mass in the Next 3 to 5 Years

Considerable progress has already been made in the embedded health and digital rights management segments, and a few platforms already offer these services. The success of these platforms, combined with further development of blockchain-based applications, will pave the way for large-scale adoption. The real estate sector is also increasingly exploring the use of digital currency technology for managing property records, and as a pooled-investment platform where a large number of people can make small investments in projects. According to the EY report, large-scale implementation of blockchain technology will take at least 3 to 5 years. Those who are prepared to invest in, experiment with and adapt to the technology by that time are expected to benefit when the shift happens.


The world turned upside down: Conventional IT is rapidly becoming shadow IT

The answer is pretty thin gruel. One of IT's remaining tasks is to architect and manage the company’s networks. This is a strategic responsibility but one that’s largely taken for granted. Another task that still falls to IT is the management of the company’s data center. If the data center is used to host revenue-generating systems, this is also a strategic responsibility, but if it’s just housing internal systems then it’s not that big of a deal. A third responsibility that IT continues to handle at many companies is maintenance of internal email systems. This is a highly visible role, but one that is likely to wane in importance as most email systems migrate to the cloud.


7 Common Data Science Mistakes and How to Avoid Them

Some data scientists feel that building a successful machine learning model is the maximum level of success. Building the right model, however, is only half the battle; it is also necessary to ensure that the model's predictive power is maintained. Many data scientists forget, or tend to ignore, that their models need to be re-validated at set intervals. A common mistake some data scientists make is assuming a predictive model is ideal simply because it fits the observational data. The predictive power of a model can disappear quickly, depending on how often the modelled relationships change. To avoid this, the best practice for any data scientist is to score their models against new data every hour, day or month, based on how fast the relationships in the model change.
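The re-validation loop described above can be sketched in a few lines. This is an illustrative example, not from the article; the names (`score_batch`, `needs_retraining`, `DRIFT_THRESHOLD`) and the threshold value are assumptions.

```python
DRIFT_THRESHOLD = 0.10  # tolerated drop from the model's baseline accuracy

def score_batch(predictions, actuals):
    """Fraction of correct predictions on a fresh batch of data."""
    correct = sum(1 for p, a in zip(predictions, actuals) if p == a)
    return correct / len(actuals)

def needs_retraining(baseline_accuracy, predictions, actuals):
    """True when accuracy on new data has drifted below tolerance."""
    current = score_batch(predictions, actuals)
    return (baseline_accuracy - current) > DRIFT_THRESHOLD

# Model validated at 0.92 accuracy; a fresh batch scores only 0.5
print(needs_retraining(0.92, [1, 0, 1, 0], [1, 1, 0, 0]))  # True
```

Run on whatever cadence matches how fast the modelled relationships change, this check turns "re-validate at set intervals" into an automatic retraining trigger.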


Mobile Payments: Risks Versus Opportunities

One noteworthy example of this phenomenon right now involves mobile payments. Specifically, we know that many technology professionals are extremely leery of mobile payments. ISACA’s 2015 Mobile Payment Security Study found only 23 percent of IT and security professionals believe mobile payments will keep information safe—which, let’s face it, is not exactly a vote of confidence.  It bears asking, though, how that compares to the alternative. Meaning, are there risks to mobile payment scenarios? Sure. Show me a technology without some risk and I’ll show you a technology that’s completely valueless. But even if there is risk, what is the opportunity cost? What do we miss out on by waiting for some future scenario that is even more locked down?


Adapting your board to the digital age

To serve as effective thought partners, boards must move beyond an arms-length relationship with digital issues (exhibit). Board members need better knowledge about the technology environment, its potential impact on different parts of the company and its value chain, and thus about how digital can undermine existing strategies and stimulate the need for new ones. They also need faster, more effective ways to engage the organization and operate as a governing body and, critically, new means of attracting digital talent. Indeed, some CEOs and board members we know argue that the far-reaching nature of today’s digital disruptions—which can necessitate long-term business-model changes with large, short-term costs—means boards must view themselves as the ultimate catalysts for digital transformation efforts.


Ransomware protection -- what you may be missing

As the saying goes, sometimes you can't see the forest for the trees. We are so used to seeing the top 10 prevention techniques that we sometimes miss the lesser-discussed approaches. These are important, because the purveyors of ransomware read the same articles listing the common approaches and can use them as a road map to improve their techniques. One of my customers is a large healthcare institution, and one of my major focuses with them has been to take a deep look at approaches to ransomware prevention and recovery. In the process, I have found many things that organizations can do that are not often discussed in the trade press. Since we in the business world need all the help we can get at this point, these can be very important. Consider a few of these.


The Technical Skills You Need to Have as a Software Developer

Many beginning programmers try to hedge their bets by learning several programming languages at once or before they try to take on their first job as a software developer. While I think that you should eventually learn more than one programming language, I would advise against doing it upfront because it will just lead to confusion, and it will divert your energies from many of the other technical skills you are going to need to learn. Instead, I’d advise you to go deep and focus on learning the ins and outs of a single programming language, so you can feel really confident in your ability to write code in that language. Remember how we talked about being as specific as possible when deciding what kind of software developer you were going to become?



Quote for the day:

"Leadership consists of nothing but taking responsibility for everything that goes wrong and giving your subordinates credit for everything that goes well." -- Dwight D. Eisenhower

July 24, 2016

Tech giants silent on new Russian surveillance law

"The companies for whom this is a real problem are the Russian telecom providers," she added, who face huge data retention mandates quite separate from the encryption requirements. "They have said [the law] will cost them trillions of roubles." One foreign company, Panama-based NordVPN, is "doubling down" on its commitment to privacy and anonymity in Russia, according to Jodi Myers, the company's head of public relations and marketing. "Our aim is to make this simple, for the less technical user," she said. But she added the firm was taking steps to "double encrypt" traffic from its Russian users. "We do not have the key [to unlock their users' encrypted internet traffic] and we do not store any customer data on our servers — not in Russia, not anywhere."


The Insider Threat: Are You at Risk?

Shadow IT happens when someone in a line of business pulls out a credit card and signs up for an app without going through the IT department. If you don’t know an app exists, you can’t make sure the right people have access to it or that appropriate access controls are put in place to protect the information stored there. You also can’t guarantee that the disgruntled employee you just fired had access revoked. Shadow IT is hard to spot because you don’t know what you don’t know. However, if things are tense with the lines of business you support, chances are good they are resorting to shadow IT. When the IT department is forced to say no to line-of-business requests for easier access, well-meaning employees, who just want to get their work done, find their own solutions.


What is a Modern Business Intelligence Platform?

Modern Business Intelligence platforms offer end-to-end capabilities, enabling users to take advantage of self-service to answer questions. Gartner defined modern BI in their most recent Magic Quadrant report, saying: “The evolution and sophistication of the self-service data preparation and data discovery capabilities available in the market has shifted the focus of buyers in the BI and analytics platform market — toward easy-to-use tools that support a full range of analytic workflow capabilities and do not require significant involvement from IT to predefine data models upfront as a prerequisite to analysis.” Datameer’s CEO builds upon these ideas in this video for Big Data & Brews, explaining that forward-thinking enterprises are moving past IT-led BI and analytics solutions for offerings that can be managed autonomously by the end-user.


Best practices for managing the security of BYOD smartphones and tablets

Attempts to foist strict controls on how employees use devices can backfire, causing staff to use workarounds that expose the company to even more risk. When setting security policies for BYOD phones and tablets, consult those employees who will be subject to them. Gartner gives the example of forcing users to input a complex passcode every time they want to use the device. "Once users experience this, they quickly become annoyed with IT, due to the extreme inconvenience of making it difficult to text/email while on the move," the report states. A good compromise in this example would be a simple four-digit numeric passcode to unlock the device, with a more complex passcode for accessing corporate data, suggests Gartner.


Container Best Practices

Container technology is a popular packaging method for developers and system administrators to build, ship and run distributed applications. Production use of image-based container technology requires a disciplined approach to development. This document provides guidance and recommendations for creating and managing images to control the application lifecycle. ... As you begin to contemplate the containerization of your application, there are a number of factors that should be considered prior to authoring a Dockerfile. You will want to plan out everything from how to start the application, to network considerations, to making sure your image is architected in a way that can run in multiple environments like Atomic Host or OpenShift.


Auto Industry Publishes Its First Set Of Cybersecurity Best Practices

The Auto-ISAC provides a mechanism for its members to share vulnerability information, conduct analysis and develop solutions that are beneficial to both the industry and its customers. Approximately a third of the vehicles on the road today in the U.S. include some connectivity that has the potential to provide a pathway into vehicle control systems. So far none of the publicly demonstrated remote takeovers on systems like Chrysler’s UConnect or GM’s OnStar have been easy to implement and only one vehicle at a time can be attacked. By the mid-2020s, virtually all new vehicles will have data connections. As we add more driver assist and automation features, the potential for a bad actor to target the transportation system and either steal data, strand vehicles or send them crashing into each other will be vastly larger.


4 security best practices to learn from the FDIC's data breaches

Apparently, departing employees accidentally grabbed financial information from FDIC loan applicants while transferring their personal data to USB keys. Davidson quotes Representative Don Beyer, ranking Democrat on the House Science, Space and Technology oversight subcommittee, talking to Lawrence Gross, FDIC's chief information and chief privacy officer: "I have a hard time understanding how you can inadvertently download ten thousand customer records." Davidson continues, "Ten thousand was the low end. One case involved forty-nine thousand records. Gross's contention that the former employees 'were not computer proficient' only made matters worse."


How to Deal with COTS Products in a DevOps World

The primary objective of DevOps is to increase the speed of delivery at reliable quality. To achieve this, good configuration management is crucial, as the level of control becomes more and more important at higher speeds of delivery (while riding a bike you might take your hands off the handlebars once in a while, but a Formula One driver is practically glued to the steering wheel). Yet commercial-off-the-shelf (COTS) products often don’t provide any obvious ways to manage them the way you manage your custom software. This is a real challenge for large organisations that deal with a mixed technology landscape. In this article I will explore ways to apply modern DevOps practices when dealing with COTS products.


Facial biometric authentication on your connected devices

The purpose of this post is to clarify how facial recognition works and to guide you in building and hosting the programming frameworks that can deliver the same feature across your devices. You could, of course, build the system on a single hardware device or mobile phone, but what if you have to connect multiple devices and perform the same actions on all of them? In such cases, adding a simple program to each one and then maintaining them all is not a good idea. That is why, in this guide, I will also show you how to build a server. The server will handle the requests, process the data being sent and generate the responses.
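The server-side flow described above can be sketched minimally: a device submits face data, the server matches it against enrolled users and returns a verdict all devices can share. This is a hypothetical illustration; the names (`ENROLLED`, `recognize`, `handle_request`) are invented, and a fingerprint comparison stands in for real face embeddings.

```python
import hashlib

# Stand-in for an enrolled-face database: user -> fingerprint of face data
ENROLLED = {"alice": hashlib.sha256(b"alice-face-data").hexdigest()}

def recognize(face_bytes):
    """Stubbed matcher: compare a fingerprint of the uploaded face data
    against enrolled users. A real system would compare face embeddings."""
    fingerprint = hashlib.sha256(face_bytes).hexdigest()
    for user, enrolled_fp in ENROLLED.items():
        if fingerprint == enrolled_fp:
            return user
    return None

def handle_request(payload):
    """Process one authentication request from any connected device."""
    user = recognize(payload.get("face", b""))
    if user is None:
        return {"status": 401, "body": "not recognized"}
    return {"status": 200, "body": f"welcome {user}"}
```

Because the matching logic lives in one place, every connected device just sends its payload to the server rather than carrying its own copy of the recognition program.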


Digital Disruption for Enterprise Architecture

Jeanne says one thing is becoming increasingly clear: enterprises will not be successful if they are not architected to execute their firm’s business strategies. At the very same time, she has found that the companies (existing successful enterprises) she talks to believe their success is not guaranteed in the digital economy. ... Digital strategies were rallying companies around a common point, but surprisingly there was not much distinction behind the rallying point beyond “I want to be the Amazon or Uber of my industry”. But Jeanne claims this is okay, because competitive advantage is not going to be about strategy but about execution. And being the best at execution will eventually take you in a different direction from other market participants.



Quote for the day:


"There is no decision that we can make that doesn't come with some sort of balance or sacrifice." --@SimonSinek


July 23, 2016

Training, Awareness Keys to Battling Social Engineering

Social engineering is especially dangerous for employees who may have special access to valuable assets that other employees may not, such as the ability to wire funds. A good example of this occurred last year when Ubiquiti Networks Inc., a US-based manufacturer of high-performance networking technology for service providers and enterprises, was taken for US $39 million. An employee of a Ubiquiti subsidiary was the victim of a CEO scam, which hijacks or impersonates the email of a senior executive within an organization. In this case the victim, who had authority to initiate wire transfers, transferred large amounts of money from company accounts to the criminal’s accounts. Adversaries are cognizant of the basic human tendency to trust people on face value, and accordingly, they abuse that trust to perform social engineering attacks. 


User experience and the IoT: tech should be all about humans

Historically, IoT solutions have not considered human beings in their equations and strategy rollout, which has proven to be a challenge, mainly because those solutions never came into contact with people except through data dashboards and notification systems. Today, however, we are seeing IoT-dependent products in the hands of people, but the consumer does not even understand that the IoT is being used. In most cases, the consumer has no idea what the IoT is. A great example is that people see Uber as a mobile app that calls a taxi; they are not running around talking about a great IoT app they just downloaded. What Uber correctly achieved was to design a service that uses IoT concepts to provide a valuable service to people. Today, those people know Uber, not the IoT. Without the IoT, though, Uber would not be possible.


Digital disruptor: now keywords in enterprise architects' job descriptions

A digital enterprise is one that takes advantage of a constellation of technology platforms and strategies -- including cloud, mobile, social, data analytics and Internet of Things. ...  the famous startups that are creating so much pain within established markets -- you know, the Ubers and Airbnbs -- do one thing really well. More established enterprises are capable of doing multiple things well. The key is doing all those things well, in an integrated fashion -- something only established companies are in a position to do. "Competitive advantage will come from taking capabilities that others may or may not have and integrating them in ways that make something extraordinarily powerful," Ross is quoted as saying. "Integrating business capabilities provides a whole value proposition that is hard for others to copy."


How to Improve Machine Learning: Tricks and Tips for Feature Engineering

Predictive modeling is a formula that transforms a list of input fields or variables into some output of interest. Feature engineering is simply the thoughtful creation of new input fields from existing input fields, either in an automated fashion or manually, with valuable inputs from domain expertise, logical reasoning, or intuition. The new input fields can lead to better inferences and insights from data and dramatically increase the performance of predictive models. Feature engineering is one of the most important parts of the data preparation process, where deriving new and meaningful variables takes place. Feature engineering enhances and enriches the ingredients needed for creating a robust model. Many times, it is the key differentiator between an average and a good model.
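A small sketch of what "creating new input fields from existing input fields" looks like in practice. The field names and thresholds here are invented for illustration; the pattern (ratios, date decomposition, interaction terms) is the general technique.

```python
from datetime import date

def engineer_features(record):
    """Derive model-ready features from raw transaction fields."""
    features = dict(record)
    # Ratio feature: spend relative to income is often more predictive
    # than either raw field alone.
    features["spend_to_income"] = record["spend"] / record["income"]
    # Date decomposition: weekday vs weekend behaviour can differ.
    features["is_weekend"] = record["purchase_date"].weekday() >= 5
    # Interaction feature encoding a piece of domain intuition.
    features["high_spend_weekend"] = (
        features["is_weekend"] and features["spend_to_income"] > 0.5
    )
    return features

row = {"spend": 300.0, "income": 500.0, "purchase_date": date(2016, 7, 23)}
print(engineer_features(row)["spend_to_income"])  # 0.6
```

None of the three derived fields adds new raw data, yet each gives the model a relationship it would otherwise have to discover on its own.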


Snowden Designs a Device to Warn if Your iPhone’s Radios Are Snitching

Huang and Snowden’s solution to that radio-snitching problem is to build a modification for the iPhone 6 that they describe as an “introspection engine.” Their add-on would appear to be little more than an external battery case with a small mono-color screen. But it would function as a kind of miniature, form-fitting oscilloscope: Tiny probe wires from that external device would snake into the iPhone’s innards through its SIM-card slot to attach to test points on the phone’s circuit board. (The SIM card itself would be moved to the case to offer that entry point.) Those wires would read the electrical signals to the two antennas in the phone that are used by its radios, including GPS, Bluetooth, Wi-Fi and cellular modem.


IBM Announces Blockchain Cloud Services on LinuxOne Server

A new cloud environment for business-to-business networks announced by IBM last week will allow companies to test performance, privacy, and interoperability of their blockchain ecosystems within a secure environment, the company said. Based on IBM’s LinuxONE, a Linux-only server designed for high-security projects, the new cloud environment will let enterprises test and run blockchain projects that handle private data for their customers. The service is still in limited beta, so IBM clients will not be able to get their hands on it just yet. Once it launches, however, the company said clients will be able to run blockchain in production environments that let them quickly and easily access secure, partitioned blockchain networks.


Bad UX kills

Great experiences don’t have to be complex: One of the greatest innovations in transit user experience in the past 50 years is not the autonomous car or the hyperloop, but rather a sign on a train that says “Quiet Car.” This simple piece of vinyl has an immense ROI, having made a positive impact on hundreds of thousands of commuters, allowing them to catch up on precious sleep or focus intently, fundamentally altering commutes from lost time into productive hours. The Pentagram-designed “LOOK!” warnings painted on the street at crossings are another lightweight, ingenious improvement. Their eyes prompt you to look the way they are pointing, and have likely saved countless cell phone zombies and tourists from getting run over by a taxi or bus, not to mention clearing the way for city emergency response resources.


Intro to knysa: Async-Await Style PhantomJS Scripting

PhantomJS is a modern headless (no GUI) browser scriptable with a JavaScript API. It’s perfect for page automation and testing. The JavaScript API is brilliant, offering many advantages, but it also suffers from JavaScript’s familiar “callback hell” problem, i.e. deeply nested callbacks. There are many libraries and frameworks to help deal with this problem. For PhantomJS, CasperJS is one such solution that is very popular, but it only mitigates the problem and does not solve it. knysa, on the other hand, solves the problem elegantly. Like CasperJS, it allows you to put steps in sequence. Unlike CasperJS, it does not add a lot of boilerplate code (e.g. casper.then(), etc.).
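The point of async-await style, putting steps in sequence without nesting callbacks, can be illustrated with Python's own async/await. This is an analogy only (knysa itself targets PhantomJS's JavaScript API); the coroutine names and the simulated page content are invented.

```python
import asyncio

async def open_page(url):
    # Stands in for page-load I/O in a real headless browser
    await asyncio.sleep(0)
    return f"<html>{url}</html>"

async def extract_title(html):
    # Stands in for evaluating JavaScript in the loaded page
    await asyncio.sleep(0)
    return html.replace("<html>", "").replace("</html>", "")

async def scrape(url):
    # The steps read top to bottom -- no nested callbacks required.
    html = await open_page(url)
    return await extract_title(html)

print(asyncio.run(scrape("example.com")))  # example.com
```

Each `await` suspends until the asynchronous step completes, which is exactly what nested callbacks express, but written as a flat sequence.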


Optimizing Dashboard Design to Drive Action

When a dashboard is working well, it focuses each recipient on how they can specifically impact organizational core metrics, or Key Performance Indicators (KPIs) such as retention, conversion and lifetime value. Before you build your first chart, understand the context in which your initiative operates. What are the core metrics your company cares about? What are the existing dashboards your executives look at every day? Make sure your data includes a semi-live feed of these core metrics so you can display them in your dashboard. This information is vital to an effective dashboard. Analyze your data to identify the correlations that will answer the “why” for action. Include customer sentiment data so you can identify the path from your organization’s activities, through customer sentiment and behavior, to resulting KPIs.
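The "analyze your data to identify the correlations" step above can be made concrete with a plain correlation check between an activity metric and a core KPI. The data series and metric names here are invented for illustration.

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sd_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Weekly support-ticket resolution rate vs. retention (hypothetical feed)
resolution_rate = [0.70, 0.75, 0.80, 0.85, 0.90]
retention       = [0.60, 0.66, 0.71, 0.78, 0.83]
print(round(pearson(resolution_rate, retention), 2))  # 1.0
```

A strong correlation like this is a candidate answer to the "why" behind a KPI movement, and therefore a candidate chart to put next to the KPI on the dashboard.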


Facebook's giant solar-powered drone takes flight to deliver internet to remote areas

According to a blog post by Jay Parikh, global head of engineering and infrastructure at Facebook, this was the first time the team had been able to fly the full-sized aircraft. The low-altitude flight lasted longer than 90 minutes, which was three times longer than had originally been planned for. The flight took place in Yuma, AZ. "When complete, Aquila will be able to circle a region up to 60 miles in diameter, beaming connectivity down from an altitude of more than 60,000 feet using laser communications and millimeter wave systems. Aquila is designed to be hyper efficient, so it can fly for up to three months at a time," Parikh wrote. While some refer to Aquila as a drone, being that it is unmanned, Facebook refers to it as "a high-altitude, long-endurance, unmanned solar-powered airplane."



Quote for the day:


“If we wait until we’re ready, we’ll be waiting for the rest of our lives.” -- Lemony Snicket


July 22, 2016

Internet of Things: From sensing to doing

The value that IoT brings lies in the information it creates. It has powerful potential for boosting analytics efforts. Strategically deployed, analytics can help organizations translate IoT’s digital data into meaningful insights that can be used to develop new products, offerings, and business models. IoT can provide a line of sight into the world outside company walls, and help strategists and decision makers understand their customers, products, and markets more clearly. And IoT can drive so much more—including opportunities to integrate and automate business processes in ways never before possible.


Software-Defined Everything: Beyond the Cloud

Software-Defined Compute is expanding past now-traditional virtualization into containers. SDN is branching out of the Cloud providers and telco infrastructure into enterprise networking. And SDS is building upon core storage abstractions like object storage, database storage, and elastic block storage to a range of data virtualization and orchestration capabilities that support Big Data use cases as well as traditional enterprise “small” data needs. In fact, vendors like Primary Data are extending this SDS vision by essentially building a Software-Defined abstraction on top of Cloud-centric storage abstractions. With Primary Data, an enterprise doesn’t have to worry whether underlying storage is object storage or database storage, for example, simplifying Hybrid Cloud scenarios and complex tasks like Big Data processing and software upgrades.


Top 10 Considerations for Efficient IoT Deployments in Smart Cities

Citizens are core to the success of any technology implementation done in the context of a city. As they are the main consumers and the biggest beneficiaries of the solution, their involvement is highly critical. Many countries have adopted the concept of “Create or Join a Project”, which aims at involving citizens at the very early stages of conceptualization and then implementation. Citizens are not just another stakeholder; they are actually a major source of data fed back to the system during the implementation process. For example, a broken water pipe can be brought quickly to the attention of the system if the solution allows a citizen to upload an image and the location of the broken pipe. The same can apply to a broken street light or a possible security breach.
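The citizen-reporting flow described above, a photo plus a location routed to the responsible department, can be sketched as a simple intake function. All names (`ISSUE_ROUTING`, `submit_report`, the department labels) are hypothetical.

```python
# Which city department handles which kind of citizen report
ISSUE_ROUTING = {
    "water_pipe": "water-utility",
    "street_light": "public-works",
    "security": "police",
}

def submit_report(issue_type, location, image_bytes):
    """Validate a citizen report and queue it for the right department."""
    if issue_type not in ISSUE_ROUTING:
        raise ValueError(f"unknown issue type: {issue_type}")
    if not image_bytes:
        raise ValueError("a photo of the issue is required")
    return {
        "department": ISSUE_ROUTING[issue_type],
        "location": location,   # e.g. (latitude, longitude) from the phone
        "image_size": len(image_bytes),
        "status": "queued",
    }

report = submit_report("water_pipe", (25.2048, 55.2708), b"\x89PNG...")
print(report["department"])  # water-utility
```

Requiring both an image and a location up front keeps the feedback loop useful: crews are dispatched to a verified, geotagged issue rather than a vague complaint.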


Cloud Computing's Big, Disruptive Multiple Hundred Billion Dollar Impact

Companies that sell hardware and software to corporate customers are all threatened by this shift. In the old days, a company would sell an operating system and software for each user. In the cloud realm, operating systems are parcelled out on shared servers on a pay-by-the-hour basis. Public cloud deployment is seen as a godsend for small companies, which used to have to spend almost all of their initial funding on servers and software. AWS upended that model to let startups get going fast and cheap by paying pennies per hour for computing power. However, the notion that the public cloud is always the cheapest option once startups get big is still debatable. Once a company hits a certain size and has to deal with lots of data, some analysts and corporate execs say it’s time to bring IT back in-house because cloud has gotten too pricey.


Cyber security basics: 4 best practices for stopping the insider threat

The insider threat, simply meaning a threat that comes from within an organisation, is a growing concern for cyber security practitioners. Unlike with external threats such as hackers or the latest malware, organisations cannot simply buy a shiny new antivirus or firewall product and rest assured that they have it covered. This is because the insider threat can follow any number of patterns. There are both malicious and inadvertent insider threat actors in abundance. On the inadvertent side, 65 percent of office workers use a single password among applications, according to the 2016 Market Pulse report commissioned by SailPoint. The survey also found that a third of employees shared passwords with co-workers, while 26 percent admitted to uploading sensitive information to cloud apps with the aim of sharing it outside the company.


GOP cyber platform "detrimental to global stability"

“There is a distinct lack of clarity about rules of the road for peacetime, and the norms and laws that do and will govern offensive cyber operations in peacetime [are] still highly malleable,” explained Robert Morgus, a policy analyst with D.C.-based think tank New America. “This means that operations conducted by the U.S. and others are highly influential in shaping those rules, and pushing the red line too far — while useful for short-term strategic goals like disrupting the Iranian nuclear program — may prove detrimental to global stability in the long run,” he added.  ... “it’s important to draw a line between offensive cyber operations conducted for espionage or intelligence gathering purposes and offensive computer network operations,” he said.


Google Sprints Ahead in AI Building Blocks, Leaving Rivals Wary

"It’s the next big area, and people are worried Google’s going to own the show," said Ed Lazowska, a computer science professor at the University of Washington who has served on the technical advisory board of Microsoft Corp.’s research lab. "There is a network effect, and it’s a really excellent system." Google initially used TensorFlow internally for products like its Inbox and Photos apps. The company made it available for free in November. Technology companies like Microsoft Corp., Amazon.com Inc. and Samsung Electronics Co. rushed to give away their own versions, hoping to get the most outside developers using their standards.  The company that wins will benefit from the collective efforts of thousands of developers using, but also updating and improving, its system. That’s an advantage when it comes time to make money from the new asset.


Cloud Services Now Account For A Third Of IT Outsourcing Market

We’ve known for some time now that the as-a-service sector has been eating into the market share of traditional service providers. How else to explain that contract counts are soaring, but contract values are remaining relatively stagnant in the traditional market? We knew anecdotally that a lot of client work was moving to the public cloud infrastructure and cloud software markets, and we also knew it was time to begin an empirical measurement of that growing shift. That’s why we decided to move beyond our initial examinations of this phenomenon and officially expand the coverage of our [index]. The drivers for cloud have changed noticeably over the past three years. Initially, cloud interest and adoption was concentrated primarily on cost reduction, in line with what we traditionally have seen as a driver for outsourcing.


Trojanized Remote-Access Tool Spreads Malware

"There is no problem with detecting the malware," Vasily Berdnikov, a security expert at Kaspersky, tells Information Security Media Group. "The problem is that, in this case, the malware came packed with legitimate software. The thinking behind this strategy is simple: Criminals expect that the system administrator will simply ignore the warning from the security solution, because he will be sure that he is downloading legitimate software from the legitimate source." Attackers have long favored gaining access to remote-access tools present inside victim organizations, because they provide an easy way to remotely launch further attacks or exfiltrate data. But Berdnikov says this is the first time Kaspersky's researchers have seen a criminal group hide malware inside a legitimate remote-access tool.


Effective Third-Party Risk Assessment – A Balancing Process

The very practical need for thorough third-party assessments stems from the fact that third parties are increasingly targeted by criminals and continue to be a primary source of breach incidents. Rather than attempt to breach the systems of large and usually well-protected company networks, criminals look for the weakest link in the chain, which is all too often a third party. The growing demand for more comprehensive third-party assessments necessarily requires expanded resources, budgets and timelines for completion. These needs run contrary to very real budget and staff constraints, and to the pace at which business units need to bring new (often web/cloud-based) products and services to market. So, how do you satisfy the growing demand for more comprehensive assessments of third-party risk controls without substantially increasing the cost and time of conducting assessments?



Quote for the day:


"In the realm of ideas everything depends on enthusiasm. In the real world all rests on perseverance." -- Johann Wolfgang von Goethe


July 21, 2016

Cognitive Business: When Cloud and Cognitive Computing Merge

Another, perhaps even more important, trend that is actually being driven by cloud computing is the rapid expansion of cognitive computing. In this arena, IBM’s Watson, famously known for defeating Jeopardy game-show champions Ken Jennings and Brad Rutter, has quickly established itself as a commercial cognitive computing powerhouse. Contemporary reports of the Jeopardy contest from the New York Times cited this victory as IBM’s “…proof that the company has taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them”. Although we are not yet at the human-replacement stage, the merger of cloud and cognitive computing is rocking the business status quo.


The State of Digital Currency: A discussion with Ed Scheidt

One of the keys to acceptance is the ability to check the validity of currency and reduce the risk of fraud. We discussed the fact that even with blockchain and other types of encryption, new technology needs to be invented that provides the same level of trust (and risk reduction) that you get with physical currency. If you look at the current one-hundred-dollar bill, it has a myriad of security features: a 3-D ribbon, color-shifting ink, watermarks, raised printing, etc. All of these features could be reproduced by a counterfeiter, but only with a large amount of time and resources. Digital currency (DC) has none of these layered features in a mature way today, but will someday. So, for DC to really work, the digital equivalents of these features will need to be created, validated, produced, and trusted.


Blockchain: a case for the general ledger

Despite the potential of a distributed ledger, financial institutions are not rushing to replace legacy systems with the new technology. Blockchain or its variants will be adopted on a bigger scale only after early movers address the underlying questions. Will a distributed network operate as efficaciously as the tried-and-tested centralised system? Can blockchain ensure interoperability? Who is responsible in the event of a dysfunctional system? How will cryptocurrency and related technology be regulated? Fortunately, the industry is not waiting for answers. Several financial services enterprises are developing in-house models and forging partnerships to create proofs of concept. Venture capital is pouring into start-ups building payment platforms using cryptography even as industry leaders incorporate blockchain technology into securities management, post-trade processing, settlement, and asset servicing.


Looking Deeper, Seeing More: A Multilayer Map of the Financial System

Multilayer maps can capture more information. They portray the financial system as a network of networks. For example, a multilayer map can help identify a large market participant that is a node in more than one market layer. Such a company could be a source of strength to the financial system, if managed well. If not, it could be a source of weakness. The failure of one of these nodes in a layer can lead to failures of dependent nodes in other layers. This phenomenon can happen repeatedly, leading to a cascade of failures. For that reason, multilayer networks are more fragile than single-layer networks. Connections between the layers can amplify the scope and magnitude of stress in a single layer. Maps of multilayer networks show three stages of damage following a major shock.
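The cascade dynamic described here can be sketched as a toy simulation. The layer names, dependency graph, and failure rule below are illustrative assumptions, not the authors' actual model:

```python
# Toy model of a multilayer financial network: a participant that is a
# node in more than one market layer couples those layers, so a single
# failure can cascade across them. Names and rules are illustrative.

def cascade(dependencies, initially_failed):
    """Propagate failures until stable: a node fails as soon as
    anything it depends on has failed."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for node, deps in dependencies.items():
            if node not in failed and failed & set(deps):
                failed.add(node)
                changed = True
    return failed

# "BankA" participates in both a payments layer and a funding layer,
# so its failure reaches dependent nodes in each.
deps = {
    "payments:DealerX": ["BankA"],
    "funding:FundY": ["BankA"],
    "funding:FundZ": ["funding:FundY"],
}
print(sorted(cascade(deps, {"BankA"})))
```

A fixed-point loop like this is the usual way contagion models are iterated: damage stops only when no remaining node has a failed dependency.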


Utah teen launches consumer drone that can fly over 70 mph

"After spending an hour with George, I was overwhelmingly impressed by his vision for a drone platform as well as his presence as an entrepreneur," wrote Ben Lambert, from Pelion Venture Partners, in a post on Medium. George clearly has an engineering mindset, but he's also a savvy businessman. When he was a kid (which wasn't that long ago, after all), he always had lemonade stands or some other way to make a few bucks. "I was always an entrepreneur at heart," he told me. In these early days, Teal has been operating out of Pelion's Salt Lake City office. George says he's managing to stay grounded while shouldering a large responsibility at such a young age, with support from his family and school, but he also mentioned "half-jokingly" that spending quality time with his investors has helped. "Ben tells me every day that I suck," George said.


10 TB in a 1 cm space: Will chlorine atoms redefine storage?

The technology depends on the ability to quickly rearrange chlorine atoms in square grids that sit next to each other as terraces. Each grid represents a single byte, and it contains slots that the atoms can be moved around in to represent either a one or a zero, thereby encoding the information. The atoms are moved between slots using a scanning tunnelling microscope. Atomic markers were added to the grids, making reading them easier and faster than previous methods. This new atomic storage technology is a major discovery, but it is still in the proof-of-principle phase, and it has some major drawbacks that may slow its development. One of the biggest issues is that it must be kept at -196 °C, the boiling point of liquid nitrogen. While warmer and cheaper than using liquid helium as a coolant, as noted in Nature, it still creates a problem.
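As a mental model of the byte-per-grid encoding: each slot pair holds an atom in one of two positions, and a row of eight such positions spells out one byte. This sketch is a deliberate simplification (the real device also embeds marker patterns for fast read-out), and the helper names are invented:

```python
def byte_to_grid(value: int) -> list[int]:
    """Lay out one byte as eight slot positions, most significant bit
    first: 0 = atom in the first slot, 1 = atom in the second."""
    return [(value >> bit) & 1 for bit in range(7, -1, -1)]

def grid_to_byte(slots: list[int]) -> int:
    """Read the atom positions back into a byte."""
    out = 0
    for s in slots:
        out = (out << 1) | s
    return out

grid = byte_to_grid(ord("A"))  # one 8-slot grid per byte
print(grid)                    # [0, 1, 0, 0, 0, 0, 0, 1]
assert grid_to_byte(grid) == ord("A")
```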


Mads Torgersen on C# 7 and Beyond

QCon chair Wesley Reisz talks to Mads Torgersen, who leads the C# language design process at Microsoft, where he has been involved in five versions of C# and has also contributed to TypeScript, Visual Basic, Roslyn and LINQ. Before joining Microsoft a decade ago, he worked as a university professor in Aarhus, Denmark, doing research into programming language design and contributing to Java generics. Key takeaways: the overall theme for C# 7 will be features that make it easier to work with data, including language-level support for tuples; the release may also include pattern matching for type switching; C# 7 is the first new release of the language to be built completely in the open; and Roslyn, the compiler and its API, allows a much more agile evolution of the language.


Securing the NextGen aviation network

In the past, we were very, very focused. We had a very simple model: we would look at how our system was secured, and if somebody else was having a technological problem on their side, the way we would protect the integrity and the safety of the system was that we simply wouldn't allow them in. That meant if airline A was having technology problems, we were not going to dispatch their flights. To a certain extent we still do some of that, but now that all of our systems are interlinked, if an airline is experiencing a problem it's very important that we understand the potential for that problem to bleed over into our systems through the interconnections and gateways connecting our system to theirs. Likewise, it's not just the companies and their operating systems. It's also the avionics systems in the aircraft themselves.


Doctor devises new database methodology to thwart hackers and end big data breaches

Yasnoff created the personal grid, in fact, so that each record of information is stored in a separate file, and each file is encrypted individually with its own encryption key. “If a hacker breaks into a server room and literally takes a whole server away, that hacker would have to break through strong encryption to get one single patient record,” Yasnoff explained. “And then that hacker would have to break through more strong encryption to get a second record, and then repeat the same for a third, and a fourth, and so on. The work involved in getting hundreds of thousands to millions of records becomes prohibitively massive for a hacker.” There is, however, one catch: unlike a database where all records are stored in one file, a clinician cannot quickly search patient records stored and encrypted separately within a database. But Yasnoff has come up with a solution to this hurdle.
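A minimal sketch of the one-file-one-key idea. The one-time pad here is a stdlib stand-in for the strong cipher a real deployment would use (such as AES-GCM), and the record contents are made up; this is not Yasnoff's actual implementation:

```python
import secrets

def encrypt_record(plaintext: bytes):
    """Encrypt a single record under its own freshly generated key."""
    key = secrets.token_bytes(len(plaintext))  # one key per record
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt_record(key: bytes, ciphertext: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# Compromising one ciphertext (or one key) exposes exactly one record;
# every additional record costs the attacker another independent break.
records = [b"patient-001: A+", b"patient-002: O-"]
vault = [encrypt_record(r) for r in records]
assert decrypt_record(*vault[0]) == records[0]
```

The trade-off the article mentions follows directly: with no shared key or shared file, there is nothing to run a single indexed query against, which is the search problem Yasnoff then has to solve.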


Oracle To Reboot Java EE For The Cloud

Within cloud-based environments, infrastructure no longer relies on application servers running on dedicated hardware. Moreover, an enormous volume of transactions must be handled, requiring a different model for state and transaction management than what has been offered in Java EE for scaling applications, Kurian said. Meanwhile, container technologies such as Docker have emerged, with requirements for externalizing configuration management, deployment of applications, and packaging. Oracle wants to make accommodations for these paradigm changes. Oracle plans to fit Java EE 8 with capabilities for persisting data in a key-value store, based on NoSQL stores, and a transaction model for eventual consistency and relaxed transactions. On the whole, Oracle's improvements would help Java EE developers evolve their skill sets to leverage technology shifts such as these, Kurian said.



Quote for the day:


"You got to be careful if you don't know where you are going, because you might not get there." --Yogi Berra


July 20, 2016

The Body as Interface and Interpreting the Body talks

It helps us move away from viewing things in terms of the interfaces we are familiar with. For instance, we were able to provide an alternative to the mouse by introducing touch screens. We then moved from touch screens to more gestural interfaces with the Kinect and virtual reality goggles. We need to build devices that give users greater autonomy to determine where they go with the design. Thinking of the body as an interface and designing with that mindset lends itself to a more experimental and iterative approach to design. ... You can track metrics like temperature, heart rate, blood pressure or breathing rate to stop traders trading when they’re more likely to make a decision based on emotion. Emotion sensors can allow users to better control their behaviour in emotionally charged situations.


MicroProfile streamlines Java EE for microservices

The MicroProfile approach to optimizing for microservices is to start with a small core set of features and grow from there with heavy involvement from the community. The core platform will likely add functionality over time, some of which will come from Java EE related JSRs, and some that are not directly related to Java EE at all. For the latter, the MicroProfile community will investigate how to more directly address microservice-related patterns like circuit breakers, bulkheads and service discovery. The MicroProfile project aims to get Java EE back on the edge of innovation, Sharples said. "The goal is to ensure that when developers think about microservices they start with Java and Java EE; this enables them to start with the standards-based platform with familiar Java APIs."


CIOs and CISOs share insights on strategic collaboration

All executives with a C-level title should be working together toward the mission, said Mansur Hasib, program chair for cybersecurity technology at the Graduate School at the University of Maryland University College and author of the books “Cybersecurity Leadership” and “The Impact of Security Culture on Security Compliance.” ... "The C-level officers should be sitting together and offering each perspective on how to achieve this particular goal. The CIO might say, ‘OK, to do this we need to have a webinar, and we might need connections with the mayor’s office and maybe the state department of health.’ Another officer could say, ‘We need to put some ads in the newspaper,’ and someone else might say, ‘We need some town halls because consumers do not have technology for webinars, and further, maybe some door-to-door canvassing.’"


Why Virtual Reality is Auto Marketing's 'Sleeping Giant'

Automakers are also bringing virtual reality inside dealerships. For instance, Audi is rolling out VR systems at dealerships that allow customers to experience vehicles in various environments or to "virtually dive into specific parts of the vehicle and explore their technical design," according to Audi's website. "You're wearing the glasses and you really think you're in the car," Marcus Kuehne, Audi's virtual-reality project lead, told Bloomberg earlier this year. "You get a good feeling for the size -- do the rims fit to the body of the car, do the colors inside the car fit well together?" he added. "You can judge this much better through this technology than on a screen."


Microsoft is rolling out Windows 10 as a subscription service

At the enterprise level, Microsoft has always charged businesses for using Windows. The upgrade to Windows 10 from Windows 7 or 8 may be free, but the continued use of Windows in your business has never been free, nor should it be. The new twist in the conversation is that the fees for using Windows will be called a subscription now. Hardly earth shattering. At the consumer level, the future prospects of Windows 10 and the subscription model are much murkier. Where enterprises are willing to pay for more security assurances and management services, consumers may fail to see the value and resist a monthly fee. Microsoft knows this and will look for ways to mitigate such entrenched resistance.


Could Bulgaria's open source law transform government software worldwide?

The advantages of going open source are numerous, Bozhanov says. Most importantly, the new legislation will bring better written software, and developers will follow better practices. "Currently there's nobody inspecting the quality of the code or the architecture, and companies can get away with pretty low-quality solutions," he says. Open source will also offer more affordable software, with less money spent on support and fewer new projects commissioned simply because the old ones didn't work properly. Also, government contractors will be able to reuse the code when working on a common piece of functionality, without having to reinvent the wheel every time. "Companies will no longer be able to sell open-source solutions as complex custom software, which has [previously] happened," Bozhanov says.


The best mobile security plans examine risks first, then prescribe

The tension between risk and control is exacerbated when it comes to mobile devices. Mobile devices (smartphones and tablets) are, by their very nature, designed to blend organizational and personal computing experiences. My phone is filled with personal photos and photos of whiteboard architecture and flow diagrams. My apps include my corporate email and expense approval as well as my personal mobile banking. ... When it comes to assessing risks, I like to first identify the specific risks and then, for each risk, define its likelihood and impact. I then figure out the best, most pragmatic way to mitigate the risks with the highest likelihood-impact combination.
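The likelihood-impact triage described above is easy to make concrete. The 1-5 scales and the example risks below are assumptions for illustration:

```python
# Score each identified risk as likelihood x impact (both on an assumed
# 1-5 scale), then work the mitigation list in descending score order.
risks = [
    {"risk": "lost device with cached corporate email", "likelihood": 4, "impact": 3},
    {"risk": "sideloaded malicious app", "likelihood": 2, "impact": 4},
    {"risk": "whiteboard photo leaked from camera roll", "likelihood": 3, "impact": 2},
]
for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['score']:>2}  {r['risk']}")
```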


Why ALM Is Becoming Indispensable in Safety-Critical Product Development

When developing complex software systems, especially in scaled Agile environments, these issues are quite common. That's exactly the need that gave rise to the notion of Application Lifecycle Management. ALM tools help developers oversee and manage several (ideally all) stages of development using a single software solution. By design, they offer functionality across the entire lifecycle, supporting development from requirements to release. While ALM is a relatively modern concept, ALM solutions have been around for a decade or so and have evolved a lot over the years. Some ALM vendors started out as developers of single-point solutions and have developed further modules to add to the basic functionality of their products, or have acquired other solutions and created integrations between these preexisting modules.


Internet of Things in healthcare: What's next for IoT technology in the health sector

Inova Design's CEO Leon Marsh agrees: "The potential with IoT is that throughout a whole care pathway a person's data is continuously being gathered and used to help diagnose the patient so they can receive the best treatment as quickly as possible." Ideally, the objective data that could be taken from a network of IoT devices will also be able to significantly lower margins of error. And in the predictive realm, it could, for example, be able to detect the onset of a wide range of health issues, from high blood pressure to early signs of delirium. Emergency admissions could then, in theory, be reduced - with proactive health systems in place to address the problems before they become more serious or irreversible. More generally, data from a network of IoT devices has the potential to transform the check-in process, filling in past health data for professionals to review automatically.


Container Management Simplifies SDN Application Deployment

One of the problems that SDN companies attempted to tackle is the issue of firewall rule explosion. Firewall access control lists (ACLs) are notoriously difficult to understand and process. For example, a customer I worked with at a former company had 50,000 firewall rules on a single firewall device, and they did not know if they could remove any one rule without breaking an application! Load balancers have similar problems to firewalls. With hundreds of applications come thousands of rules that must reside in a single hardware load balancer. Clearly, there is a problem. One way to attempt to solve this problem is to create network application centricity. Many network IT vendors claim application-centric infrastructure and networking.
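One concrete reason such rule sets are so hard to prune is first-match semantics: a later rule may be completely shadowed by an earlier one, or may be load-bearing, and nothing in the ACL itself says which. A tiny sketch of shadow detection (the rules and the prefix-only model are illustrative; real ACLs also match ports, protocols, and directions):

```python
import ipaddress

def shadowed(rules):
    """Return rules that can never match because a single earlier rule
    already covers their entire source prefix (first-match evaluation)."""
    nets = [ipaddress.ip_network(prefix) for prefix, _action in rules]
    return [
        rules[i][0]
        for i, net in enumerate(nets)
        if any(net.subnet_of(earlier) for earlier in nets[:i])
    ]

rules = [
    ("10.0.0.0/8", "allow"),
    ("10.1.0.0/16", "deny"),   # unreachable: 10.0.0.0/8 matches first
    ("0.0.0.0/0", "deny"),
]
print(shadowed(rules))  # ['10.1.0.0/16']
```

Even this detects only single-rule shadowing; a rule hidden by the union of several earlier rules needs full reachability analysis, which is one reason 50,000-rule ACLs go unpruned.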



Quote for the day:


"The more that you read, the more things you will know. The more that you learn, the more places you’ll go." -- Dr. Seuss


July 19, 2016

Cybersecurity control a concern for digital businesses

Gartner predicts that by 2018, 25% of corporate data traffic will bypass enterprise security controls and flow directly to the cloud from mobile devices. With data no longer restricted to data centers, it is important to stop trying to control information and instead determine how it flows, Pratap added. “Finding all sensitive data and tracking all access in all forms will be too onerous for most organizations,” she said. “Each organization will have to manage their ability to do this within the limits of the resources they can commit. From personally identifiable information to sensitive intellectual property, the impact of compromise of such information on the organization needs to be assessed regularly.”


From Pig to Spark: An Easy Journey to Spark for Apache Pig Developers

Pig has a lot of qualities: it is stable, scales very well, and integrates natively with the Hive metastore through HCatalog. By describing each step atomically, it minimizes the conceptual bugs that you often find in complicated SQL code. But sometimes Pig has limitations that make it a poor programming paradigm for your needs. The three main limitations are: Pig is a pipeline and doesn’t offer loops or code indirection (IF..THEN), which can sometimes be mandatory in your code. ... Finally, a third Pig limitation is related to input data formats: whereas Pig is good with CSV and HCatalog, it seems a bit less comfortable reading and processing some other data formats like JSON (through JsonLoader), whereas Spark integrates them natively.
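The two gaps called out here — no loops or IF..THEN, and weak JSON support — are precisely what a Spark driver program inherits for free from its host language. A stdlib-only sketch of that shape (no cluster needed; in actual PySpark the json.loads step would be spark.read.json, and the field names are invented):

```python
import json

# JSON lines that Pig would need JsonLoader (and some patience) for.
raw = ['{"user": "a", "clicks": 3}', '{"user": "b", "clicks": 9}']
records = [json.loads(line) for line in raw]

# Ordinary host-language control flow -- a plain IF inside a loop --
# versus Pig's single straight-line pipeline.
threshold = 5
heavy = [r["user"] for r in records if r["clicks"] > threshold]
print(heavy)  # ['b']
```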


Insurance is ready for an upgrade

Before too long, IoT may enable carriers to become primarily the ensurers of safety and productive use of properties, rather than just the insurers of damages should a loss occur. If IoT detects the imminent failure of a $100 compressor in a $1 million piece of equipment that prevents a $100 million business-interruption loss, an entirely new value chain is created. If carriers don’t seize the moment, outside tech firms could launch IoT platforms that already have an ingrained risk-transfer component, thereby beating insurers at their own game. Nor are life insurers immune to the disruptions caused by enhanced connectivity. More life carriers will likely take the plunge into telematics, including some utilizing a fitness-monitoring device to award points for those who exhibit healthy behaviors, thereby allowing policyholders to earn premium discounts and other rewards while facilitating a richer, more holistic relationship with their insurer.


Introduction to data-science tools in Bluemix – Part 3

A big part of any data science activity is learning how to put the data in a format that helps you gain insight. A common task is looking at the data in time segments, joining them on date patterns or time-of-year dates. In this recipe we will look at how to transform dates so they can be used as date formats rather than text strings. In addition, we will look at joining data frames from multiple data sources. ... You will notice that the date is in the format “MMMM-YY”, which is a concern because the year is not specific. Because I know the data, I have made a rule in this case that everything less than 20 is for the year 2000 and beyond; everything 20 and above is for the 1900s. The next concern is that I need my date in “YYYY-MM-DD” format, and there is no “day” in the source date. I am going to default it to “01”.
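The transformation described can be sketched in a few lines of Python; the function name is mine, and the format assumes full English month names like “July-16”:

```python
from datetime import datetime

def normalize(date_str: str) -> str:
    """Convert 'MMMM-YY' (e.g. 'July-16') to 'YYYY-MM-DD', defaulting
    the day to 01 and applying the two-digit-year rule: values below
    20 belong to the 2000s, 20 and above to the 1900s."""
    month_name, yy = date_str.split("-")
    month = datetime.strptime(month_name, "%B").month
    year = (2000 if int(yy) < 20 else 1900) + int(yy)
    return f"{year:04d}-{month:02d}-01"

print(normalize("July-16"))   # 2016-07-01
print(normalize("March-87"))  # 1987-03-01
```

Once every source is normalized to the same format (or to real date objects), joins across data frames line up on equal keys instead of near-miss text.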


Europe Builds a Network for the Internet of Things. Will the Devices Follow?

For growth to accelerate, says de Smit, a few things are necessary. The first is for the KPN network to enable location-based features, which would, for instance, allow a shipping container to be tracked in transit across the country—something expected to go live before the end of 2016. The second is IoT coverage beyond national borders. Siemens, Shimano, and other large companies are very interested in gaining access to IoT networks, but only when there is enough geographic coverage, says de Smit. That may take a few years. KPN is not the only company building out the IoT. SigFox, a French startup, claims its competing wireless grid already covers 340 million people in parts of 22 countries. The company raised well over $100 million in investment in 2015 alone, and is using the money to expand as rapidly as possible.


Red Hat Shoots to Solve Container Storage with Gluster and OpenShift

The integration translates to another option for storing data inside containers. That’s important because, to date, other persistent storage solutions for containers have tended to be clunky. Here’s why: Docker containers are ephemeral. They spin up and down as needed, which is what makes containerized infrastructure so scalable and agile. But it also makes it hard to store data persistently, since you can’t store permanent data inside containers very effectively if the containers themselves are not permanent. Previous attempts to solve this conundrum have centered on creating special containers dedicated to storage, or allowing containerized apps to access storage on the host system.


Organising for Analytics Success - Centralising vs. Decentralising

As we know, the analytics team needs to have an acute understanding of the business and the business unit they are working in. To be able to build models and derive insights, it's important that there is some context to the objectives of the business unit as well as the problem the analytics team is solving for. It is on this premise, then, that many Heads of Analytics (and similar) believe that analytics has to be decentralised. Deploy a Head of Analytics into each business unit, allow them to work alongside the business owners and build insights with specific knowledge of the customer and the product. This structure makes perfect sense. Except when you take into account that there is a distinct lack of skills when it comes to people who can build advanced analytical models; and understand business; and have the ability to lead a team and engage with business.


Chief data officer job stakes claim in data innovation

We forget, but before big data and analytics became the mainstays, shops would take all of their data out of transactional systems, build a data warehouse, do some data cleansing and run some reports, and maybe, if you were really, really good, that could become the golden copy of your data, which you could send back to your applications. That's what we called the closed loop. It was data warehouse nirvana. But the IT and application development groups would have their release cycles, and the data warehouse group would have its release cycles. The two would never meet, and they didn't really care about each other. Now, the big data platform has really become the back end of some applications, especially for analytics like recommendation engines and applications that measure customers' propensity to buy.


5 steps to avoid overcommitting resources on your IT projects

Maureen Carlson, Partner, Appleseed Partners, says, "Not enough companies are connecting the dots about the impact of resource overcommitment and the ability to deliver on innovation to meet growth objectives. The research shows that companies are working on products or projects that are at risk of delayed delivery because there was not enough capacity to take them on in the first place. Mature organizations are in a position to evaluate capacity in real-time to make critical business tradeoffs and see continued investment in this area as a competitive differentiator." ... PMOs play a crucial role in assisting organizations with strategy and execution and as such must recognize the need for effective resource management and capacity planning.


Has open source become the default business model for enterprise software?

When it comes to building the business, open source and proprietary are the same -- but different. The biggest difference is the starting point. The proprietary software company starts with an idea that is refined based on identifying customer pain points and classic gap analysis. With open source, the trigger is less formal, because at the outset the primary risk is sweat equity. Somebody gets an idea, develops it in the wild, and in place of gap analysis there's the sink-or-swim process of developer interest going viral. But ultimately, both need to deliver some unique value-add, scale it, and go to market. Then there is the neatness, or lack thereof, of the open-source model. Witness the long tail of adoption of Android updates, or the ordered disorder of the Hadoop platform, where each commercial platform has different mixes and matches of open-source projects.



Quote for the day:


"To double your net worth, double your self-worth. Because you will never exceed the height of your self-image." -- Robin Sharma


July 18, 2016

How big data is having a 'mind-blowing' impact on medicine

Research by Ericsson predicted that, while currently only 27% of the population in Africa has access to the internet, data traffic is expected to increase 20-fold by 2019—double the growth rate of the rest of the world. Terheyden explained that while infrastructure may be rather basic in places such as Africa, and some improvements still need to be made around issues such as bandwidth, telehealth has already begun to open up new opportunities, so much so that when compared to the way medicine is practiced in developed countries, it appears archaic. "I know there are still some challenges with bandwidth...but that to me is a very short-term problem," he said. "I think we've started to see some of the infrastructure that people are advocating that would completely blow that out of the water."


Harnessing the Data Tipping Point of IoT

It was not a huge leap for the industry to realize that an IoT global network of continuously connected devices would mean that data would not only be created at geometric rates, but that it would become one of the most valuable commodities in the world. And although there are many new start-up companies storing, analyzing and integrating massive lakes of big data created from the IoT, not many have actually considered how the IoT will transform how organizations think and implement data quality and information governance. Wikipedia defines information governance as a set of core disciplinary structure, policies, procedures, processes and controls implemented to manage information at an enterprise level, supporting an organization’s immediate and future regulatory, legal, risk and environmental and operation requirements.


Post-Brexit fintech – don’t just sit there, do something

It has been rightly said that nothing much will happen for a couple of years. Those who said that to reassure don’t know much about the markets. For the FS world, “nothing much happening” while we wait, slow evolution towards an unknown status is the worst kind of climate. Uncertainty isn’t a friend of the markets, and to navigate it, banks and FS institutions will retrench, limit “nice to have” activity, stretch timelines for experiments and investment cycles, diversify and minimise exposure and risk. The aspirational and experimental initiatives will be the first to suffer, not because banks suddenly don’t care, but because they need to protect their staff, shareholders and regulatory standing in order to still be around at the end of the storm to still have aspirations.


French public sector’s never-ending struggle with the cloud

There is no question that the French public sector could benefit from cloud technology. It could allow the government to consolidate its IT resources. Different government agencies currently have their own datacentres, and none of them are used at full capacity. As cloud computing is perfectly suited to fluctuations in capacity, scaling up and down would be quick and easy. It would also provide the flexibility required to implement new services more quickly. As government agencies develop new programmes, they could rapidly implement and deploy the applications to support them. Acknowledging this untapped opportunity, the French government last year released details of a two-pronged strategy to move data and services to the cloud.


Blockchain market outlook: Hype vs. reality

A blockchain system can link to the ERP systems of all those distributed companies, and/or link to their bank accounts directly, and reveal the exact cash position at any given time across the enterprise, across the globe. You could practically watch it go up and down in real time. That kind of knowledge is pretty valuable: the blockchain gives the financial arm of a company instant or near-instant access to current information about current assets [and] cash positions, and lets it rapidly roll up consolidated financials. ... So blockchain just gets added to the environment. You don't have to buy anything or redo anything. You just need to integrate the various financial systems to the blockchain at certain points so that each exposes its data to it.
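The "roll up consolidated financials" step the excerpt describes is, at its core, a simple aggregation over whatever per-subsidiary balances the shared ledger exposes. A minimal sketch in Python; the subsidiary names and figures are hypothetical, not from the article:

```python
# Sketch: consolidating a global cash position from per-subsidiary
# balances that a shared ledger might expose. All names and amounts
# below are illustrative placeholders.
from collections import defaultdict

def consolidate(positions):
    """Sum cash balances by currency across all subsidiaries."""
    totals = defaultdict(float)
    for subsidiary, balances in positions.items():
        for currency, amount in balances.items():
            totals[currency] += amount
    return dict(totals)

snapshot = {
    "acme_us": {"USD": 1_250_000.0},
    "acme_uk": {"GBP": 400_000.0, "USD": 90_000.0},
    "acme_jp": {"JPY": 30_000_000.0},
}

print(consolidate(snapshot))  # e.g. {'USD': 1340000.0, 'GBP': 400000.0, 'JPY': 30000000.0}
```

The hard part in practice is not the roll-up itself but the integration points: getting each ERP and bank feed to publish to the ledger in the first place, which is exactly what the excerpt says has to be wired up.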


Balancing the Demands of Big Data With Those Of Accurate Data

Gaming, ad tech and e-commerce are three examples of industries that have fully entered the multiverse of heavy, high-value workloads. For these industries, it is equally important to process massive numbers of transactions and retain complete data accuracy, but traditional database technology puts these aims at odds with each other. If B-grade sci-fi has taught us anything, it is that messing with nature, whether tweaking the space-time continuum or resurrecting dinosaurs, is dangerous business. Similarly, the kind of coding changes required to scale MySQL beyond its natural limits (such as those involved in sharding), or to ensure complete data integrity in NoSQL, wreak havoc on the very applications that the databases are supposed to power, making them fragile and much more complex to manage.
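To make the sharding point concrete, here is a minimal sketch of the kind of application-level routing logic the excerpt alludes to; the shard names are hypothetical. The routing function itself is trivial, which is deceptive: the fragility comes from everything around it (resharding, cross-shard queries and transactions) that the application now also has to own.

```python
# Sketch of application-level sharding: route each row to one of N
# MySQL shards by hashing its key. Shard names are placeholders.
import zlib

SHARDS = ["mysql-shard-0", "mysql-shard-1", "mysql-shard-2", "mysql-shard-3"]

def shard_for(user_id: str) -> str:
    """Pick a shard deterministically from the row key."""
    bucket = zlib.crc32(user_id.encode("utf-8")) % len(SHARDS)
    return SHARDS[bucket]

print(shard_for("user-42"))  # always the same shard for the same key
```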


Artificial Intelligence Swarms Silicon Valley on Wings and Wheels

“Whenever there is a new idea, the valley swarms it,” said Jen-Hsun Huang, chief executive of Nvidia, a chip maker that was founded to make graphic processors for the video game business but that has turned decisively toward artificial intelligence applications in the last year. “But you have to wait for a good idea, and good ideas don’t happen every day.” By contrast, funding for social media start-ups peaked in 2011 before plunging. That year, venture capital firms made 66 social media deals and pumped in $2.4 billion. So far this year, there have been just 10 social media investments, totaling $6.9 million, according to CB Insights. Last month, the professional social networking site LinkedIn was sold to Microsoft for $26.2 billion, underscoring that social media has become a mature market sector.


How to go beyond the reboot to provide topnotch tech support

You head out to your vehicle to start your morning commute. You turn the key and the car doesn't start. What's the first thing you do? You grab a container of gasoline, right? No! Well, not initially. You will more than likely take other steps to troubleshoot why your car doesn't start. You may check whether the headlights come on, or turn the key to hear whether the starter turns over. At any rate, your first step is not to put gasoline in the tank. A fair understanding of how vehicles operate aided you in your triage. Why not apply this to IT support? When a user gets an error submitting an online form, restarting the browser won't resolve the issue. Analyzing the error message may open a door to a resolution. The user may have been entering text into a numeric field of the online form.
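The closing example is worth spelling out: a form that validates its input and emits a descriptive message gives the support tech something to analyze, where a generic failure forces guesswork. A toy sketch, with a hypothetical field name:

```python
# Toy illustration: validate a numeric form field and surface an error
# message that actually names the field and the bad value, so triage
# starts from evidence rather than a reboot.
def validate_quantity(raw: str):
    """Return (value, None) on success or (None, message) on failure."""
    if not raw.strip().isdigit():
        return None, f"Field 'quantity' expects a whole number, got {raw!r}"
    return int(raw.strip()), None

value, error = validate_quantity("twelve")
print(error)  # names the field and the offending input
```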


11 Programming Languages For DevOps Success

DevOps depends on two critical pieces: Software development and operational automation. Each of these requires programming and (follow me, here) programming tends to need a programming language. For those trying to chart a career path in DevOps, the question of what language or languages to learn for each side of the equation is key. ... Are you on a DevOps team? Have you led part of a DevOps organization? I'm curious about the tools you or your team have used as part of successful DevOps. I'm equally curious about languages you think are important for people getting into the field in 2016. I'll be hanging out in the comments section below -- once you've reviewed our list, stop by and let me know what you think.


Skills gap leaves firms at risk from cyber attacks

“An insufficient number of specialists entering the IT market has forced organisations to consider effective retention programmes, training existing staff, partnering with educational institutions and developing flexible hiring policies that include both permanent and contract specialists,” he added. The technology sector as a whole is suffering from a skills gap, with many firms resorting to up-skilling internal employees to fill vacant roles. Firms are increasingly looking for people with soft skills as well as technical skills. Owen said providing insights from data analysis and communicating IT security issues to others in the firm are important for an IT security employee. Cloud security skills are the most sought after by IT firms, but they are also the most challenging to find among potential candidates.



Quote for the day:


"The greatest single human gift - the ability to chase down our dreams." -- Prof. Hobby


July 17, 2016

Windows Containers and Docker

The Windows Server container shares the kernel with the OS running on the host machine, which means all containers running on that machine share the same kernel. At the same time, each container maintains its own view of the OS, registry, file system, IP address, and other components, with isolation provided to each container through process, namespace, and resource control technologies. The Windows Server container is well suited for situations in which the host OS and containerized applications all lie within the same trust boundary, such as applications that span multiple containers or make up a shared service. However, Windows Server containers are also subject to an OS/patch dependency with the host system, which can complicate maintenance and interfere with operations.


BNP's Ex-Blockchain Lead is Now Coding Smart Contracts for Clearinghouses

Along with co-founder and fellow BNP Paribas alum, David Acton, the bootstrapped team has already built a proof-of-concept for contract creation and trade registration that uses a smart contract for US treasuries and other cash-like short-term treasury instruments in Europe. The smart contracts are intended to represent bilateral contracts between parties that are backed by different guarantors, likely a member of the CCP clearinghouse. The smart contracts themselves would be administrated by the CCP. In Europe, CCPs such as the European Central Counterparty NV and Eurex Clearing serve counterparties by both taking funds from a buyer and assets from a seller and managing the risk in a wide range of ways. In the US, the DTCC fulfills a similar function.


The Batch Mode Window Aggregate Operator in SQL Server 2016: Part 2

Besides the general performance advantages of batch mode processing compared to row mode processing, this operator uses a dedicated code path for each window function. Many inefficiencies in the original row mode optimization are removed. For example, the need for an on-disk spool is eliminated by maintaining the window of rows in memory and using a multi-pass process over that window in a streaming fashion. ... Remember that when querying columnstore, sorting for computation of window functions is unavoidable since columnstore data isn’t ordered; however, you do get the benefits of reduced I/O cost, batch mode processing and parallelism, with much better scaling for larger number of CPUs, compared to row mode processing.
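The "window in memory, multi-pass streaming" idea can be illustrated outside SQL Server. The pure-Python analogue below (not the operator itself, just the concept) keeps only the current frame of rows in memory while streaming over the input, which is what eliminates the on-disk spool; it mimics something like `SUM(val) OVER (ORDER BY i ROWS 2 PRECEDING)`:

```python
# Illustrative analogue of the in-memory window: a bounded deque holds
# the current frame, so no rows are ever spooled to disk.
from collections import deque

def moving_sum(values, frame=3):
    window = deque(maxlen=frame)   # the in-memory window of rows
    out = []
    for v in values:
        window.append(v)           # oldest row falls out automatically
        out.append(sum(window))    # aggregate over the current frame
    return out

print(moving_sum([10, 20, 30, 40]))  # [10, 30, 60, 90]
```

A real batch-mode operator additionally processes rows in vectorized batches and uses per-function code paths, which is where the large speedups come from.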


SQL Server, Power BI, and R

R has also been integrated into Power BI, allowing you to create fully integrated visualizations with the power of the R language. In this blog post I will show an example using R in SQL Server to create a model and batch score testing data, then use Power BI Desktop to create visualizations of the scored data. Rather than moving your data from the database to an external machine running R, you can now run R scripts directly inside the SQL Server database – train your model inside the database with the full power of CRAN R packages plus the additional features of speed and scalability from the RevoScaleR package. And once you have a model, you can perform batch scoring inside the database as well.
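The train-then-batch-score flow the post describes is language-agnostic. A minimal stand-in (in Python rather than R, with made-up data) shows the shape of it: fit a model once, then score a whole batch of new rows against it; in the article both steps run inside SQL Server via RevoScaleR instead of in the application.

```python
# Hypothetical stand-in for the in-database R workflow: train a simple
# one-variable least-squares model, then batch-score new rows with it.
def fit(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def batch_score(model, batch):
    """Apply the trained model to a batch of new inputs."""
    a, b = model
    return [a * x + b for x in batch]

model = fit([1, 2, 3, 4], [2, 4, 6, 8])   # learns y = 2x
print(batch_score(model, [5, 10]))        # [10.0, 20.0]
```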


Strengthening the Foundations of Software Architecture

Once the software architecture is in place, it may subsequently be discovered the requirements have changed or were never fully understood. How easy it is to change the software depends on whether it was architected in such a way that alterations don’t significantly conflict with the original design. The more extreme agile methodologies deviate further from this blueprint. Applications are written in smaller slices, delivering value to the user faster but reducing visibility of the overall design. There often isn’t any one individual responsible for designing the architecture, and the decision-making is delegated to the developers incrementally working on the software. Because cycle times are reduced, the team can get feedback faster and respond more quickly to changing requirements, but how easy it is to implement changes still depends on whether they are congruous with the architecture.


Basho Open Sources Time Series Database Riak TS 1.3

Basho is indeed the biggest contributor to Riak TS, primarily because we had to make so many additions and changes to have a purpose-built time series solution. We are currently talking to several companies about adding a series of capabilities to Riak TS to solve a large problem in the time series arena. As for the objective of open sourcing the code, we believe that we have a lot to offer the community in terms of innovative approaches, ideas and leadership in distributed systems, and want to collaborate to build even better solutions. That process is almost always accelerated when you leverage open source as a path. We have a long history of open sourcing our software and gaining support from the community in creating better solutions.


Modeling your big data enterprise architecture after the human body

Think about the storage systems in our brain. We have short-term, sensory, long-term, implicit, and explicit. Why do we have so many? The answer is that each system provided an evolutionary benefit over a generalized one. These systems most likely have different indexing strategies, flushing mechanisms, and aging-out/archiving processes. We find a parallel in our world of software architecture, with storage systems like RDBMS, Lucene search engines, NoSQL stores, file systems, block stores, distributed logs, and more. The same goes for our processing systems. Vision interpretation is very different from complex decision-making. Just like the brain, software architecture has different execution patterns and optimizations that serve different use cases: tools like SQL, Spark, SPARQL, NoSQL APIs, search queries, and many more. There is a reason for the different approaches to processing, and there will be more approaches in the future as we find different ways to address our problems.


How Cardihab uses data to speed recovery of cardiac patients

Cardihab is a spin-off company from the Commonwealth Scientific and Industrial Research Organisation, and is also a participant of the HCF Catalyst accelerator program. McBride explained that a key reason people do not complete their CR program is accessibility and convenience. "The way normal cardiac rehab works is it's usually a 6-8 week long program where the person has to go to a clinic once or twice a week and that can really be inconvenient, especially for patients who have returned to work, or for rural remote patients," McBride said. Cardihab has been designed to collect data about a patient, including how many steps they have taken and their blood pressure and sugar levels, via Bluetooth-enabled monitors. The information is then uploaded to the cloud and shared with the patient's clinician, who can access it through an online portal.


Deep Learning: Using an Artificial Brain to Protect against Cyberattacks

When applied to cybersecurity, it takes milliseconds to feed a raw data file through the deep neural network and obtain detection with the highest accuracy rate. This predictive capability of detecting a never-before-seen malware variant enables not only extremely accurate detection, but also leads the way to real-time prevention, because at the very second a malicious file is detected, it is already blocked. Therefore, while traditional machine learning yields better results than signatures and manual heuristics, deep learning has shown groundbreaking results in detecting first-seen malware, even compared with classical machine learning. This observation is consistent with improvements achieved by deep learning in other fields, such as computer vision, speech recognition, text understanding, etc.


SQL Server 2016 Upgrade Planning

If you are using Master Data Services or Data Quality Services, keep in mind that any customization will get overwritten. You must back up your MDS and DQS databases before upgrading to prevent any accidental data loss. In SQL Server 2016, those applications do have schema-level upgrade changes. After upgrading to SQL Server 2016, any earlier version of the Add-ins for Excel will no longer work. You will need to tell your users to download the SQL Server 2016 versions of the Master Data Services or Data Quality Services Add-in for Excel. Integration Services packages do not get automatically updated in a SQL Server upgrade. You will need to migrate packages after the service upgrade completes, using the Integration Services Package Upgrade Wizard. Developers can upgrade 2012 or 2014 projects to 2016 without manual adjustments after upgrade. They can also choose to incrementally update without deploying the whole project.



Quote for the day:

“I do not think that there is any other quality so essential to success of any kind as the quality of perseverance.” -- John D. Rockefeller