Daily Tech Digest - June 08, 2018

More than two-thirds of the open Redis servers contained malicious keys and three-quarters contained malicious values, suggesting the servers are infected. Also, according to Imperva's honeypot data, the infected servers with “backup” keys were attacked from a medium-sized botnet (610 IPs) located in China (86 percent of IPs). Researchers said that the firm's customers were attacked more than 75,000 times by 295 IPs that run publicly available Redis servers. The attacks included SQL injection, cross-site scripting, malicious file uploads and remote code execution. Researchers said the numbers suggest that attackers are harnessing vulnerable Redis servers to mount further attacks on their behalf. Nadav Avital, security research team leader at Imperva, said that the most likely reason 75 percent of open Redis servers are infected with malware is that they are directly exposed to the internet. “However, this is highly unrecommended and creates huge security risks. To help protect Redis servers from falling victim to these infections, they should never be connected to the internet and, because Redis does not use encryption and stores data in plain text, no sensitive data should ever be stored on the servers,” said Avital.
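
As a quick illustration of the exposure described above, here is a minimal sketch, using the redis-py client, of how one might check whether a Redis instance you own answers commands without authentication. The host is a placeholder; this is an illustration, not Imperva's methodology, and scanning hosts you don't control is not authorized use.

```python
# Minimal sketch: check whether a Redis instance answers unauthenticated
# commands, using the redis-py client. The host below is a placeholder for
# a server you own.
import redis

def is_open_redis(host, port=6379, timeout=2.0):
    """Return True if the server accepts commands without authentication."""
    client = redis.Redis(host=host, port=port, socket_timeout=timeout)
    try:
        client.ping()          # succeeds only if no password is required
        return True
    except redis.exceptions.RedisError:
        # AuthenticationError/ResponseError means auth is enforced;
        # ConnectionError means the port is closed or filtered.
        return False

if __name__ == "__main__":
    print(is_open_redis("127.0.0.1"))
```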


Artificial intelligence will improve medical treatments


The potential benefits are great. As Tom Devlin, a stroke neurologist at Erlanger, observes, “We know we lose 2m brain cells every minute the clot is there.” Yet the two therapies that can transform outcomes—clot-busting drugs and an operation called a thrombectomy—are rarely used because, by the time a stroke is diagnosed and a surgical team assembled, too much of a patient’s brain has died. Viz.ai’s technology should improve outcomes by identifying urgent cases, alerting on-call specialists and sending them the scans directly. Another area ripe for AI’s assistance is oncology. In February 2017 Andre Esteva of Stanford University and his colleagues used a set of almost 130,000 images to train some artificial-intelligence software to classify skin lesions. So trained, and tested against the opinions of 21 qualified dermatologists, the software could identify both the most common type of skin cancer, and the deadliest type (malignant melanoma), as successfully as the professionals. That was impressive. But now, as described last month in a paper in the Annals of Oncology, there is an AI skin-cancer-detection system that can do better than most dermatologists. Holger Haenssle of the University of Heidelberg, in Germany, pitted an AI system against 58 dermatologists.


In Transforming Their Companies, CIOs Are Changing, Too


The goal of IT isn’t merely to speed operations and introduce new and shinier ways to do things; it’s also to produce qualitative improvements. When an organization generates greater value for customers, everyone wins. For example, at Alaska Airlines, which operates 1,200 daily flights and accommodates 44 million passengers, the focus is on delivering a consistent experience to passengers, employees, and others. Every technology, process, and service touches this concept, which boosts the odds that the airline delivers a “unique brand experience at every touch point, digital and otherwise,” explained Charu Jain, the airline’s vice president and CIO. She constantly works to align the business plan with the technology, she said. Jain accomplishes this by focusing on a few key areas: identifying strategy and priorities; establishing clearly defined metrics; tapping analytics for constant feedback; ensuring that groups and teams are in lockstep with one another; and taking calculated risks, failing fast, and moving forward. Her goal, she said, is to encourage ownership and accountability across the organization. She reinforces progress by “celebrating the small wins accomplished through an innovative spirit.”


Measuring DevOps Success

Speed without quality is of little value to the organization, so the next set of DevOps metrics covers the failure rate (the percentage of releases that have problems) and the number of tickets (how many issues a release has). Each organization or team needs to find its appropriate balance between speed and quality. The initial focus on quality numbers should be relative trends, not absolute values, to make sure that the teams are progressing in the right direction. Drilling down into failure causes should identify which steps of the process, such as code review or test coverage, need attention. An important detail from bug reports and trouble tickets is whether they are internal or user reported. Mature DevOps teams have failure rates of less than 10 percent, according to the 2017 Puppet Report, and have an increasing percentage of issues captured by monitoring tools before being reported by users. The metrics that underperforming DevOps organizations often miss are those related to a customer’s experience with the application, their usage (or not) of new features, and the resulting impact to the business. These measurements are outside the realm of traditional development and QA tools, and capturing them means changing mindsets and adopting new tools.
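
To make these metrics concrete, here is a minimal sketch of computing them from a release log. The record fields are assumptions for illustration, not the schema of any particular DevOps tool.

```python
# Sketch of the quality metrics named above, computed from a hypothetical
# release log. The record fields are assumptions for illustration.
releases = [
    {"id": "r1", "failed": False, "tickets": 2, "user_reported": 1},
    {"id": "r2", "failed": True,  "tickets": 5, "user_reported": 4},
    {"id": "r3", "failed": False, "tickets": 1, "user_reported": 0},
]

failure_rate = 100.0 * sum(r["failed"] for r in releases) / len(releases)
tickets_per_release = sum(r["tickets"] for r in releases) / len(releases)

# Share of issues caught internally (e.g. by monitoring) rather than by users:
total = sum(r["tickets"] for r in releases)
internal_share = 100.0 * (total - sum(r["user_reported"] for r in releases)) / total

print(f"failure rate: {failure_rate:.0f}%")           # mature teams: < 10%
print(f"tickets per release: {tickets_per_release:.1f}")
print(f"issues caught before users: {internal_share:.0f}%")
```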


Honda Gets Ready For The 4th Industrial Revolution


With the tremendous amount of data created from a wide variety of sources, including sensors on cars, customer surveys, smartphones and social media, Honda’s research and development team uses data analytics tools to comb through data sets in order to gain insights it can incorporate into future auto designs. As the company’s big data maturity has increased, its engineers are learning to work with and leverage data that had previously been too cumbersome to extract meaning from, thanks to the assistance of big data technology and analytics tools. More than 100 Honda R&D engineers are now trained in big data analytics. Thanks to the sensors on Honda vehicles and feedback from customers, the team is able to make adjustments to the design of its fleet for things it would never have realized were an issue without the data insight. The analytics tools help Honda “explore big data and ultimately design better, smarter, safer automobiles,” said Kyoka Nakagawa, chief engineer TAC, Honda R&D.


AI at Google: our principles

We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions. We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time. ... While this is how we’re choosing to approach AI, we understand there is room for many voices in this conversation. As AI technologies progress, we’ll work with a range of stakeholders to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches. And we will continue to share what we’ve learned to improve AI technologies and practices.


What you need to know about the EU Google antitrust case

There's no deadline for the Commission to complete its investigation, but indications from Brussels are that it will publish a decision in the Android case before August 2018. In the Google Android case, the Commission could theoretically fine it up to $11 billion, or 10 percent of parent company Alphabet's $110 billion worldwide revenue in 2017 -- but recent antitrust fines have come nowhere near that level. There's a separate investigation ongoing into the company's AdSense online advertising service, looking at the restrictions it places on the ability of third-party websites to display search ads from its competitors. That could expose the company to a similar-size fine. And, of course, the Commission has already hit Google with one antitrust fine, for abusing the dominance of its search engine to promote its own comparison shopping services. That cost it €2.42 billion ($2.7 billion) in June 2017, around 3 percent of its prior-year revenue. Other recent fines for abuse of a dominant market position are in the same ballpark. In January 2018 it fined Qualcomm €997 million ($1.2 billion), or just under 5 percent of annual revenue, while Intel's €1.06 billion ($1.3 billion) fine back in June 2014 represented about 3.8 percent of revenue.


True Digital Banking Solution For Connected Customers

“The banking sector is undergoing a transformation driven by the change in people’s communication habits. The availability of the Internet created the need for a completely new approach to communicating with clients and to satisfying the expectations of today’s ‘connected’ client. What we want to achieve with our solutions is straightforward communication and convenient banking, which is exactly what Halkbank is delivering to its clients. With the Omni-channel platform in place, Halkbank is more adaptable to change and much quicker in delivering products to the market, always ready to answer the high demands of a modern banking customer,” stated Mr. Milan Pištalo, Account Executive at NF Innova. “The process of modernization and continuous enhancement of banking services is constantly present, being a must for competitive advantage in acquiring new, satisfied clients, since both the banking and finance sectors, on a global level, are particularly dynamic.
People know what to expect from a company, and they know precisely how valuable they are to the company.


DevOps shops weigh risks of Microsoft-GitHub acquisition


Many developers at Mitchell International have wanted to use GitHub instead of TFS for a long time, Fong said, but whether that enthusiasm persists is unclear. "Microsoft has said it won't disrupt GitHub, but history has shown some influence has to be there," he said. "If it will be a feature of TFS and Visual Studio, some changes will be needed." Dolby Laboratories is accustomed to dealing with Microsoft licensing, but that familiarity has bred contempt, said Thomas Wong, senior director of enterprise applications at the sound system products company in San Francisco. Even if Microsoft doesn't change GitHub's prices or license agreements, "GitHub could become one conversation in an hour versus the whole conversation" in meetings with the vendor's sales reps, Wong said. GitHub already integrates well with the broader ecosystem of DevOps tools such as Jenkins for CI/CD, AWS CodeBuild and CodeDeploy for automated provisioning, and Atlassian's Jira issue tracker. "That ecosystem is not something I need Microsoft to build for me," Wong said. A large part of GitHub's appeal was that integration with other popular tools, which may now be at risk under Microsoft's ownership, Mitchell International's Fong said.


Network-intelligence platforms, cloud fuel a run on faster Ethernet

Of particular interest to most observers is the growing migration to 100G Ethernet. “There was on the order of about 1 million 100G Ethernet ports shipped in 2016; this year we expect somewhere near 12 million to ship,” said Boujelbene. “Hyperscalers certainly drove the market early on, but large enterprises are increasingly looking at that technology for the increased speed and price/performance it brings.” Cisco agreed with that observation. “The requirement for more high-speed ports and more data being driven from the dense edges of the network is driving the upgrade of the backbone,” said Roland Acra, senior vice president and general manager of Cisco’s Data Center Business Group. “We see the need especially from financial and trade-floor customers who need the bandwidth and speed.” While 100G is ramping up, so is another level of Ethernet speed – the 25G segment, which saw revenue increase 176 percent year over year with port shipments growing 359 percent year over year in 1Q18, according to IDC. The push to 25G is largely due to top-of-rack requirements in dense data-center server access ports.



Quote for the day:


"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley


Daily Tech Digest - June 07, 2018

Microsoft drops data center into the sea: 'It will keep working for five years'

According to Naval Group, the French defense naval-systems contractor that built Microsoft's pod, the data center has a payload of 12 racks containing 864 servers with a cooling system. After assembly, it was moved by truck to Scotland, from where it was dragged out to sea on a raft and then carefully lowered 117 feet (35.6 meters) to a rock slab on the seabed. Although the data center is built to last five years, it will remain on the seabed for at least a year as Microsoft observes how it fares. The pod is attached by a cable leading back to the Orkney Islands electricity grid, which supplies 100 percent renewable wind and solar energy to about 10,000 residents. The data center itself needs a quarter of a megawatt. The Natick team also explained the pod's cooling system and how it uses ocean water to cool liquids inside the system. "The interior of the data-center pod consists of standard computer racks with attached heat exchangers, which transfer the heat from the air to some liquid, likely ordinary water," they said.



How to think like a programmer — lessons in problem solving

Do not try to solve one big problem. You will cry. Instead, break it into sub-problems. These sub-problems are much easier to solve. Then, solve each sub-problem one by one. Begin with the simplest. Simplest means you know the answer (or are closer to that answer). After that, simplest means this sub-problem being solved doesn’t depend on others being solved. Once you’ve solved every sub-problem, connect the dots. ... “If I could teach every beginning programmer one problem-solving skill, it would be the ‘reduce the problem technique.’ For example, suppose you’re a new programmer and you’re asked to write a program that reads ten numbers and figures out which number is the third highest. For a brand-new programmer, that can be a tough assignment, even though it only requires basic programming syntax. If you’re stuck, you should reduce the problem to something simpler. Instead of the third-highest number, what about finding the highest overall? Still too tough? What about finding the largest of just three numbers? Or the larger of two? Reduce the problem to the point where you know how to solve it and write the solution. Then expand the problem slightly and rewrite the solution to match, and keep going until you are back where you started.”
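
The "reduce the problem" progression quoted above translates naturally into code. A minimal Python sketch, expanding step by step from the larger of two numbers back up to the original problem:

```python
# The "reduce the problem" progression from the passage: start with the
# larger of two numbers, expand step by step until the original problem
# (third-highest of ten) falls out naturally.

def larger(a, b):                      # step 1: larger of two
    return a if a > b else b

def largest_of_three(a, b, c):         # step 2: largest of three
    return larger(larger(a, b), c)

def highest(numbers):                  # step 3: highest overall
    best = numbers[0]
    for n in numbers[1:]:
        best = larger(best, n)
    return best

def third_highest(numbers):            # step 4: the original problem
    return sorted(numbers, reverse=True)[2]

nums = [12, 4, 19, 7, 3, 15, 8, 19, 1, 10]
print(highest(nums), third_highest(nums))   # 19 15 (duplicates counted)
```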


What is pervasive engineering?

Pervasive engineering is physical product (and software) development designed to harness information streams from digitally tracked (typically Internet of Things centric) assets using smart sensors that are connected to an analysis hub of data analytics and management. Pervasive simulation (through the use of digital twins and supporting data analytics) allows (physical product and software) engineers to explore design and product development using real-world conditions. Prototypes can be created to ‘fork’ concepts that skew existing products (or services) while those existing assets remain in working operation, in their core pervasive state. The state of all machine assets is therefore developed continuously, iteratively and pervasively. You can read more here from Ansys on how it positions its approach to this element of design and it is from these pages that we have drawn the above definition. The firm’s SAP partnership embeds Ansys’ pervasive simulation software for digital twins into SAP’s digital supply chain, manufacturing and asset management portfolio. The partnership’s first result is called SAP Predictive Engineering Insights enabled by Ansys and has been built to run on the SAP Cloud Platform.


You’re probably doing your IIoT implementation wrong

One of the great misconceptions about the IIoT is that it’s a brand-new concept – factory floors and utility stations and other major infrastructure have all been automated to one degree or another for decades. What’s different, however, is the newly interconnected nature of this technology. Steve Hanna, senior principal at Infineon Technologies, said that the security risks of IIoT have grown rapidly of late, thanks to a growing awareness of IIoT attack vectors. A factory that was never designed to be connected to the Internet, with plenty of sensitive legacy equipment that can be 30 years old or older and designed to work via serial cable, can find itself suddenly exposed to the full broadside of remote bad actors, from Anonymous to national governments. “There’s a tool called Shodan that allows you to scan the Internet for connected industrial equipment, and you’d be surprised at the number of positive results that are found with that tool, things like dams and water and sewer systems,” he said. The most common oversights, according to Hanna, are a lack of two-factor authentication, allowing hackers to compromise equipment they find via things like Shodan, and direct interconnections between an operational equipment network and the Internet.


Why Microsoft's GitHub Deal Isn't a Sign of the Apocalypse

Despite loud protests and much rending of garments by many in the Minecraft community, the video game sandbox remains as popular as ever. What many GitHub developers fail to realize is that their friendly community was going to be acquired by someone anyway — either that or face eventual liquidation. The private company, CEO-less and still feeling the effects of a workplace discrimination charge, was burning through money with no immediate prospects of additional venture capital funding or launching an IPO. It's just as well that Microsoft stepped forward with piles of cash to make things better. Would GitHub developers feel any more loved in the hands of an Apple, Google or Oracle? Really? Remember, too, that even if Microsoft actually does revert back to its bumbling old days and somehow manages to run GitHub into the ground, the open source coder community is not without viable alternatives. GitLab, a GitHub rival, recently boasted that it has seen a 10-fold increase in the number of developers moving their repositories over to its service. And if GitLab somehow drops the open source ball, the young stallions will inevitably find yet another place to hang their backpacks.


What's the difference between low-code and no-code platforms?

The difference between no-code and low-code platforms principally comes down to approachability, ease of use, and the level of technical knowledge the user is assumed to have. With a no-code platform like Quick Base, a majority of our customers have no programming skills whatsoever, and they're able to use Quick Base to basically help burn down their backlogs, streamline workflows, and get their work done very quickly. Low-code platforms, on the other hand (also very useful and important), do assume some level of technical sophistication and technical skills in their users, and they're principally aimed at helping IT developers get a very productive platform on which to build and deliver projects quickly. No-code platforms in particular can help companies drive their digital transformation, by really providing the power of software to many more people in their organization. At Quick Base, what we found time and time again with our customers is that their IT and developer groups are working very hard on the big rock priorities within their organization, and what Quick Base can really help them do is move forward tons and tons of little rocks, little efforts that sort of stack up in a backlog of priorities that central IT never gets to.


Is explainability enough? Why we need understandable AI

In order to create this human-centric ‘understandable’ AI, a person must be empowered through the AI to make the call on algorithmically ambiguous decisions. This means that the AI making the initial decisions about the veracity of a transaction has to also be built in a way that a human reviewing a specific issue can help resolve – without being a data scientist or “algorithmically literate”. Using a non-black box model, the data scientist can identify confidence parameters and inform the UI/UX designer what the nature of these parameters is. The UI/UX designer then creates an AI output that is descriptive rather than prescriptive. That is, it would clearly explain the confidence parameters and enable an end user to provide reasoning for a decision. Do we simply want machines to take over and make decisions for us? Likely not – we’ll instead want to take a more collaborative approach with machines where they augment us to make better decisions but allow us to manage inputs and set guidelines. Therefore, we need not only transparency and explainability but also understanding.


The game-changing potential of smartphones that can smell

How can a smartphone smell? According to KIT, "the electronic nose only is a few centimeters in size. The nose consists of a sensor chip equipped with nanowires made of tin dioxide on many individual sensors. The chip calculates specific signal patterns from the resistance changes of the individual sensors. These depend on the molecules in ambient air, differ for the different scents and, hence, are characteristic and recognizable. If a specific pattern has been taught to the chip before, the sensor can identify the scent within seconds. To start the process, the researchers use a light-emitting diode that is integrated in the sensor housing and irradiates the nanowires with UV light. As a result, the initially very high electrical resistance of tin dioxide decreases, such that changes of resistance caused by molecules responsible for the smell and attached to the tin dioxide surface can be detected." Clearly, this research has a long way to go before handset manufacturers will be open to including such a mechanism, but who nose? After all, in the ultra-tight packed innards of today's smartphone designs, "a few centimeters" is hardly trivial.


10 ways the enterprise could put blockchain to work

The distributed ledger technology has the promise to make many operations more efficient and enable new business initiatives. But limitations in the technology itself as well as the business issues that arise with its implementation have curtailed mass adoption, said David Furlonger, vice president and fellow at Gartner. "All firms can do right now is experiment," Furlonger said. "They have to look at multiple offerings in the marketplace and understand the different governance models, data management architectures, security levels, and how it impacts their business." A large number of companies are now inquiring about the technology, said Martha Bennett, principal analyst at Forrester. "Many of the firms I speak with have projects going on, but not always with a firm view on whether to operationalize them," Bennett said. "There are a few highly ambitious projects under way, but these haven't gone live yet." Putting blockchain to work depends on the use case, Bennett said. "If there is a use case that calls for multi-party collaboration around shared trusted data, with or without an added element of automation, then that's worth pursuing if the existing system is error-prone, full of friction, or otherwise deficient," Bennett said.


How to protect physical infrastructure from cyberattacks

How do you know the unknown? So really, it's not about identifying who these future or current threat actors might be, it's about understanding the types of attacks that we might be vulnerable to. The types of attacks that are emerging. We see for example evolutions in AI technology coming on very, very fast here. We realize, well, AI has the potential of being extremely good for our quality of life and the products that we build, but ultimately this technology is going to be turned against us. So as cyber professionals, it's our job to start to anticipate this technology and how these technologies are going to then be applied to the attack vectors, attacking our devices looking for openings, and how we can then build the fences against what we anticipate. It's very much a game of understanding our product, understanding the attack surface, and building the fences for these types of attack vectors. ... So when you start looking at the world of cybersecurity today and you look at the type of markets we're dealing with, some of the challenges come right down to geopolitical attacks as Andy was mentioning just a few seconds ago.



Quote for the day:


"Intuition becomes increasingly valuable in the new information society precisely because there is so much data." -- John Naisbitt


Daily Tech Digest - June 06, 2018

Avoid these digital transformation false starts

Discussing “bi-modal IT” or “run the business vs. grow the business” may actually jeopardize your digital transformation. “By segregating legacy technologies from next-gen solutions, you are labeling one team as the past and another as the future,” says Lee. “When you identify one set of technologies — and one team — as something to get rid of, not only do you hurt the morale of half of your team, but you fail to innovate at the very foundation of your transformation. It’s not just customer-centric mobile apps that deserve innovation.” ... “There needs to be a collective agreement of whether your role is to transform the very foundations of your business or deliver new capabilities that the business will choose if and how to adopt,” says Lee. “You need to understand those expectations, especially when you are new to the company.” ... “The stakes of traditional IT disciplines: stability, security, software quality, and capacity are magnified during a digital transformation,” says Lee. “Those basics are only getting more important as your digital transformation takes technology closer to the customer and the core of what your company does.”



Blockchain’s Role in Securing Data Privacy

Blockchains make it easier to trace data, but they don’t enable you to control the flow of data once you’ve given away access. Since blockchains exist on a distributed network, there’s no central authority to stop someone from sharing the data you just shared. Blockchain doesn’t solve the issue of data leaks and re-sharing sensitive data. It only makes those leaks easier to trace. There’s still a need for privacy tools that encrypt and create access controls on blockchain data. A hybrid approach of a private, closed blockchain along with custom-developed privacy solutions may prove to be the best bet to gain the benefits of blockchain without losing the control of more centralised systems. draglet, a German blockchain development company, works in this field of private blockchain development, amongst other blockchain projects. With the right customisation, private blockchains could provide an easy to audit, low-risk way to store and manage customer data without sacrificing data controls. Additionally, smart contracts written on the blockchain could automate much of the access controls, sharing agreements, and data management tasks. 
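
To see why blockchain data is "easy to audit" and yet leaks remain uncontrollable, consider a minimal hash-chained log: any edit to a record is detectable, but nothing in the structure stops a holder from re-sharing the data. This is an illustrative sketch, not draglet's implementation or any production blockchain.

```python
# Minimal sketch of the tamper-evidence that makes blockchain data "easy to
# audit": each record embeds the hash of its predecessor, so any edit to a
# shared record is detectable -- but nothing stops a reader from re-sharing it.
import hashlib, json

def add_record(chain, payload):
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    prev = "0" * 64
    for rec in chain:
        body = {"payload": rec["payload"], "prev": rec["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != digest:
            return False
        prev = rec["hash"]
    return True

log = []
add_record(log, {"customer": "c1", "consent": "marketing"})
add_record(log, {"customer": "c1", "consent": "revoked"})
print(verify(log))                           # True
log[0]["payload"]["consent"] = "all"
print(verify(log))                           # False -- the edit is traceable
```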


Unpacking the event-driven microservices approach


When you use microservices with complex event processing (CEP) outputs, you should think in terms of API managers, brokers and service buses. In this situation, microservices are invoked like service-oriented architecture or REST components, so developers need to know the message formats required, the nature of stateless or stateful processing and the way that microservices are sequenced along the workflows. Traditional programming practices that optimize the reuse of the smaller components can be effective here. Component reuse is an explicit benefit of microservices, and it has to be addressed in your design. Reuse is facilitated by the fact that the sizing, in functional terms, of a microservice used behind a CEP front end is almost totally controlled by the development team. Microservices with larger functional scopes aren't easily reused but are more efficient because there are fewer network paths to transit. Noncontextualized events are usually processed by microservices, lambdas or functional components. They are then sequenced in an orderly way by a separate component; this is referred to as an orchestrator, workflow engine or step function.
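
A minimal sketch of that pattern: small stateless handlers (stand-ins for microservices) registered per event type, sequenced by a separate orchestration component. All names are illustrative; no specific CEP product or API is implied.

```python
# Stateless handlers are registered against event types, and a separate
# orchestrator sequences them along a workflow. All names are illustrative.
from typing import Callable, Dict, List

handlers: Dict[str, Callable[[dict], dict]] = {}

def microservice(event_type: str):
    """Register a handler for one event type (stateless, hence reusable)."""
    def wrap(fn):
        handlers[event_type] = fn
        return fn
    return wrap

@microservice("order.placed")
def reserve_stock(event: dict) -> dict:
    return {**event, "stock": "reserved"}

@microservice("order.reserved")
def charge_payment(event: dict) -> dict:
    return {**event, "payment": "charged"}

def orchestrate(event: dict, workflow: List[str]) -> dict:
    """The separate 'workflow engine': run handlers in workflow order."""
    for step in workflow:
        event = handlers[step](event)
    return event

print(orchestrate({"order_id": 42}, ["order.placed", "order.reserved"]))
```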


Most businesses still struggling with mobile working and security

Fifty-three percent cited the complexity and management of the technology that employees need and use as one of their top three biggest problems with remote working. Over half (54%) say that while their organisation’s mobile workers are willing to comply with requests relating to security measures, employees lack the necessary skills or technologies required to keep data safe. Nearly a third (29%) take the radical approach of physically blocking all removable media, and a further 22% ask employees not to use removable media although they have no technology to enforce this. “The number of organisations blocking removable media has increased compared with responses to the same question in 2017, when 18% said they were physically blocking all removable devices. A unilateral ban is not the solution and ignores the problem altogether whilst presenting a barrier to effective working. Instead, businesses should identify corporately approved, hardware encrypted devices that are only provided to staff with a justified business case. The approved devices should then be whitelisted on the IT infrastructure, blocking access to all non-approved media,” said Jon Fielding, Managing Director, EMEA, Apricorn.


What is TensorFlow? The machine learning library explained

The single biggest benefit TensorFlow provides for machine learning development is abstraction. Instead of dealing with the nitty-gritty details of implementing algorithms, or figuring out proper ways to hitch the output of one function to the input of another, the developer can focus on the overall logic of the application. TensorFlow takes care of the details behind the scenes. TensorFlow offers additional conveniences for developers who need to debug and gain introspection into TensorFlow apps. The eager execution mode lets you evaluate and modify each graph operation separately and transparently, instead of constructing the entire graph as a single opaque object and evaluating it all at once. The TensorBoard visualization suite lets you inspect and profile the way graphs run by way of an interactive, web-based dashboard. And of course TensorFlow gains many advantages from the backing of an A-list commercial outfit in Google. Google has not only fueled the rapid pace of development behind the project, but created many significant offerings around TensorFlow that make it easier to deploy and easier to use: the above-mentioned TPU silicon for accelerated performance in Google’s cloud
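
The eager execution mode mentioned above looks like this in practice. A minimal sketch assuming TensorFlow 2.x, where eager mode is the default; in the 1.x releases current when this was written, you would call tf.enable_eager_execution() first.

```python
# Eager execution in a nutshell: operations run and can be inspected
# immediately, instead of building an opaque graph and evaluating it all at
# once. Assumes TensorFlow 2.x, where eager mode is the default.
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)        # evaluated right away -- no session required
print(y.numpy())           # [[ 7. 10.] [15. 22.]]

# Each operation can be stepped through separately, gradients included:
w = tf.Variable(2.0)
with tf.GradientTape() as tape:
    loss = w * w
print(tape.gradient(loss, w).numpy())   # 4.0
```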


Now that everything can be tokenized, banks are taking notice

The key, according to Krauwer, is to further shape this future vision and learn how individuals can benefit from such a proposition. Even though the realization of a bank managing customers’ immaterial assets may still be far away, it’s not science fiction either, she states. “The past couple of years, individuals have become more and more empowered to get the most out of their assets,” Krauwer added. “Whether you make extra money by renting out your house through Airbnb or get free clothing by promoting a brand through your Instagram account, there’s no denying that a whole new economy has emerged with significantly lower entry barriers than in the pre-platform era.” ... One example is social network Earn.com, which allows its members to earn tokens whenever they respond to a message from a fellow community member. Yet, with an array of opportunities coming to the fore, a solution may be called upon that enables a person to get the most out of their assets with the least amount of friction involved that still maintains a person’s privacy.


Machine Learning in Finance – Present and Future Applications

Combine more accessible computing power, the internet becoming more commonly used, and an increasing amount of valuable company data being stored online, and you have a “perfect storm” for data security risk. While previous financial fraud detection systems depended heavily on complex and robust sets of rules, modern fraud detection goes beyond following a checklist of risk factors – it actively learns and calibrates to new potential (or real) security threats. This is where machine learning fits into finance for fraud – and the same principles hold true for other data security problems. Using machine learning, systems can detect unique activities or behaviors (“anomalies”) and flag them for security teams. The challenge for these systems is to avoid false positives – situations where “risks” are flagged that were never risks in the first place. Here at TechEmergence we’ve interviewed half a dozen fraud and security AI executives, all of whom seem convinced that, given the incalculably high number of ways that security can be breached, genuinely “learning” systems will be a necessity in the five to ten years ahead.
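
As a hedged illustration of such a "learning" system, here is an anomaly-detection sketch using scikit-learn's IsolationForest as a stand-in. The feature columns and numbers are invented for the example; real fraud systems use far richer signals.

```python
# Sketch of anomaly-based fraud flagging with scikit-learn's IsolationForest.
# The features (amount, hour of day) are illustrative placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = np.column_stack([rng.normal(50, 15, 1000),    # typical amounts
                          rng.normal(14, 3, 1000)])    # typical hours
fraud = np.array([[900.0, 3.0], [1200.0, 4.0]])        # odd amount at odd hour
X = np.vstack([normal, fraud])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)              # -1 = anomaly, 1 = normal
print(X[flags == -1][:5])             # transactions flagged for review
```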


Boards not asking right security questions


Harding said the second lesson relates to the fact that the most difficult decision throughout the cyber breach was deciding when to bring its customer-facing systems back online. “My question to the engineers was: What risks will we be taking if we put those systems back online? I realised that we could only go ahead when the cyber risk was lower than the business risk of being offline and that cyber risk needs to be a board decision,” she said. The third important lesson, said Harding, was that engineers really can communicate in English when they have to. “We learned that when engineers explain what they do in a way that non-technical people understand, that is when the magic really happens,” she said. In conclusion, Harding said it is extremely important that cyber security is not allowed to become a scary taboo. “We can’t make the digital world 100% safe, but we can make it civilised by building the necessary social, moral and legal scaffolding by having the right debates as a society to agree and set the rules of the road,” she said.


Embracing agile software methodologies to improve workflows

It is also important to keep a sustainable pace. While agility is the goal in these types of software development environments, the most efficient and effective method for creating software is keeping a realistic timetable. Having bouts of productivity is actually counter to the process. A steady, consistent pace is of the utmost importance. In order to keep pace, you will need to have consistent meetings with your team. Daily meetings, or scrum meetings, are the goal for software development teams in an agile environment. In these daily meetings, software developers, engineers, and business people outside of the technical team go over all that has been accomplished and all that will be accomplished in the 24 hours to come. These daily meetings are structured around larger timetables. These time periods are punctuated with certain goals and usually last around 30 days. At the end of this 30-day period, major reviews of the software product happen, major revisions are proposed, and timetables are revised.


Recruiting talent for digital cultures: Tips from McKinsey, Korn Ferry

When searching for digital talent, the wrong people are often the right people, said Swift, global leader for digital solutions at the global consulting firm Korn Ferry Hay Group. It's a conundrum her group takes pains to explain to clients seeking advice on creating digital cultures. "We actually do an exercise with executives where we have them list all the reasons they might not hire somebody," she said, citing as an example a common red flag -- the "jumpy resume." Prospective employees with a history of moving from job to job are often dismissed as bad bets, she said, but great digital candidates often do just that. "So, instead of saying, 'Well this person can't commit and they're flaky'," Korn Ferry asks clients to consider an alternative: i.e., "that this person is curious and adaptable, which are two of the traits in our research by the way that pop as being very predictive of success in digital talent," Swift said. But Swift warned it "takes a massive effort for some of these large organizations to say, 'OK, I'm not going to hire in my own image anymore" and practice what she calls reverse onboarding.



Quote for the day:


"It is amazing what you can accomplish if you do not care who gets the credit." -- Harry S. Truman


Daily Tech Digest - June 05, 2018

10 Open Source Security Tools You Should Know

The people, products, technologies, and processes that keep businesses secure all come with a cost — sometimes quite hefty. That is just one of the reasons why so many security professionals spend at least some of their time working with open source security software. Indeed, whether for learning, experimenting, dealing with new or unique situations, or deploying on a production basis, security professionals have long looked at open source software as a valuable part of their toolkits.  However, as we all are aware, open source software does not map directly to free software; globally, open source software is a huge business. With companies of various sizes and types offering open source packages and bundles with support and customization, the argument for or against open source software often comes down to its capabilities and quality. For the tools in this slide show, software quality has been demonstrated by thousands of users who have downloaded and deployed them. The list is broken down, broadly, into categories of visibility, testing, forensics, and compliance. If you don't see your most valuable tool on the list, please add them in the comments.



The growing ties between networking roles and automation

Automation was expected to steal jobs and replace human intelligence. But as network automation use cases have matured, Kerravala said, employees and organizations increasingly see how automating menial network tasks can benefit productivity. To automate, however, network professionals need programming skills to determine the desired network output. They need to be able to tell the network what they want it to do. All of this brings me to an obvious term that's integral to automation and network programming: program, which means to input data into a machine to cause it to do a certain thing. Another definition says to program is "to provide a series of instructions." If someone wants to give effective instructions, a person must understand the purpose of the instructions being relayed. A person needs the foundation -- or the why of it all -- to get to the actual how. Regarding network automation, the why is to ultimately achieve network readiness for what the network needs to handle, whether that's new applications or more traffic, Cisco's Leary said.


5 ways location data is making the world a better place

In the insurance sector, detailed data creates better predictions and more accurate customer quotes. Yet potential purchasers often don’t know the information needed for rigorous risk assessments, such as the distance of their house from water. Furthermore, lengthy and burdensome questionnaires can lose firms business; analysis from HubSpot found that reducing form fields improves customer conversions. PCA Predict uses its Location Intelligence platform to compile free data from the Land Registry and Ordnance Survey, including LiDAR height maps, as well as commercial address data, to determine accurate information on a potential customer’s property, such as distance from a river network, height, footprint, whether the property is listed and its risk of wind damage. The model is also being developed to determine a building’s age using machine learning and road layout. “We take disparate datasets and apply different types of analysis to extract easy-to-use attributes for insurers,” says Dr Ian Hopkinson, senior data scientist at GBG, the parent company of PCA.


Adoption of Augmented Analytics Tools Is Increasing Among Indian Organizations

Indian organizations are increasingly moving from traditional enterprise reporting to augmented analytics tools that accelerate data preparation and data cleansing, said Gartner, Inc. This change is set to positively impact the analytics and business intelligence (BI) software market in India in 2018. Gartner forecasts that analytics and BI software market revenue in India will reach US$304 million in 2018, an 18.1 percent increase year over year. ... "Indian organizations are shifting from traditional, tactical and tool-centric data and analytics projects to strategic, modern and architecture-centric data and analytics programs," said Ehtisham Zaidi, principal research analyst at Gartner. "The 'fast followers' are even looking to make heavy investments in advanced analytics solutions driven by artificial intelligence and machine learning, to reduce the time to market and accuracy of analytics offerings."


Apple’s Core ML 2 vs. Google’s ML Kit: What’s the difference?

A major difference between ML Kit and Core ML is support for both on-device and cloud APIs. Unlike Core ML, which can’t natively deploy models that require internet access, ML Kit leverages the power of Google Cloud Platform’s machine learning technology for “enhanced” accuracy. Google’s on-device image labeling service, for example, features about 400 labels, while the cloud-based version has more than 10,000. ML Kit offers a couple of easy-to-use APIs for basic use cases: text recognition, face detection, barcode scanning, image labeling, and landmark recognition. Google says that new APIs, including a smart reply API that supports in-app contextual messaging replies and an enhanced face detection API with high-density face contours, will arrive in late 2018. ML Kit doesn’t restrict developers to prebuilt machine learning models. Custom models trained with TensorFlow Lite, Google’s lightweight offline machine learning framework for mobile devices, can be deployed with ML Kit via the Firebase console, which serves them dynamically.
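
The custom-model path described above starts with an ordinary TensorFlow Lite conversion. A minimal sketch, assuming TensorFlow 2.x (whose converter API differs from the 1.x releases current in 2018); the model is a toy, and the resulting .tflite file is what you would upload in the Firebase console.

```python
# Sketch of producing a TensorFlow Lite model for ML Kit to serve via
# Firebase. Toy model; assumes the TensorFlow 2.x converter API.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:   # this file is what Firebase hosts
    f.write(tflite_bytes)
```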


How to evaluate web authentication methods

Two attributes I hadn’t given a lot of thought to are “requiring explicit consent” and “resilient to leaks from other verifiers.” The former ensures that a user’s authentication is not initiated without them knowing about it, and the latter is about preventing related authentication secrets from being used to deduce the original authentication credential. The authors evaluate all the covered authentication solutions across all attributes, and they include a nice matrix chart so you can see how each compares to the others. It’s a genius table that should have been created a long time ago. The authors rate each authentication option as satisfying, not satisfying or partially satisfying each attribute. The attributes aren’t ranked, but anyone could easily take the unweighted framework, add or delete attributes, and weight it with their own needed importance. For example, many authentication evaluators looking for real-world solutions will want to add cost (both initial and ongoing) and vendor product solutions. The authors' candid conclusions include: “A clear result of our exercise is that no [authentication] scheme we examined is perfect – or even comes close to perfect scores.”
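
Taking that suggestion literally, a weighted version of the matrix is a few lines of code. The schemes, attributes, scores and weights below are invented placeholders, not the paper's actual ratings.

```python
# Score each scheme as satisfying (1), partially satisfying (0.5) or not
# satisfying (0) each attribute, then apply your own weights. All values
# here are made up for illustration.
attributes = {"explicit-consent": 2.0, "leak-resilient": 3.0, "low-cost": 1.0}

schemes = {
    "password":       {"explicit-consent": 1.0, "leak-resilient": 0.0, "low-cost": 1.0},
    "hardware-token": {"explicit-consent": 1.0, "leak-resilient": 1.0, "low-cost": 0.0},
    "sms-otp":        {"explicit-consent": 1.0, "leak-resilient": 0.5, "low-cost": 0.5},
}

for name, scores in schemes.items():
    total = sum(attributes[a] * scores[a] for a in attributes)
    print(f"{name:15s} {total:.1f} / {sum(attributes.values()):.1f}")
```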


Advanced Architecture for ASP.NET Core Web API


Before we dig into the architecture of our ASP.NET Core Web API solution, I want to discuss what I believe is a single benefit which makes .NET Core developers' lives so much better; that is, Dependency Injection (DI). Now, I know you will say that we had DI in .NET Framework and ASP.NET solutions. I will agree, but the DI we used in the past would be from third-party commercial providers or maybe open source libraries. They did a good job, but for a good portion of .NET developers there was a big learning curve, and all DI libraries had their unique way of handling things. Today with .NET Core, we have DI built right into the framework from the start. Moreover, it is quite simple to work with, and you get it out of the box. The reason we need to use DI in our API is that it allows us to have the best experience decoupling our architecture layers, and also allows us to mock the data layer or have multiple data sources built for our API. To use the .NET Core DI framework, just make sure your project references the Microsoft.AspNetCore.All NuGet package (which contains a dependency on Microsoft.Extensions.
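
The decoupling and mocking benefit the author describes is constructor injection at heart. A language-agnostic sketch in Python (the article's own code is C# against ASP.NET Core's built-in container); all class names are invented for illustration.

```python
# Constructor injection: the controller depends on an abstraction, so the
# real data layer and a mock can be swapped with no code change.
from typing import Protocol, List

class AlbumRepository(Protocol):            # the abstraction the API depends on
    def list_albums(self) -> List[str]: ...

class SqlAlbumRepository:                   # real data layer
    def list_albums(self) -> List[str]:
        return ["...rows from the database..."]

class FakeAlbumRepository:                  # mock for tests / a second source
    def list_albums(self) -> List[str]:
        return ["test-album"]

class AlbumsController:
    def __init__(self, repo: AlbumRepository):   # dependency is injected
        self._repo = repo

    def get(self) -> List[str]:
        return self._repo.list_albums()

print(AlbumsController(SqlAlbumRepository()).get())
print(AlbumsController(FakeAlbumRepository()).get())   # swapped for testing
```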


Intuitively Understanding Convolutions for Deep Learning


The advent of powerful and versatile deep learning frameworks in recent years has made it possible to implement convolution layers into a deep learning model an extremely simple task, often achievable in a single line of code. However, understanding convolutions, especially for the first time can often feel a bit unnerving, with terms like kernels, filters, channels and so on all stacked onto each other. Yet, convolutions as a concept are fascinatingly powerful and highly extensible, and in this post, we’ll break down the mechanics of the convolution operation, step-by-step, relate it to the standard fully connected network, and explore just how they build up a strong visual hierarchy, making them powerful feature extractors for images. The 2D convolution is a fairly simple operation at heart: you start with a kernel, which is simply a small matrix of weights. This kernel “slides” over the 2D input data, performing an elementwise multiplication with the part of the input it is currently on, and then summing up the results into a single output pixel.
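
Here is that sliding-window operation spelled out in NumPy. Strictly speaking, like deep learning frameworks, this computes cross-correlation (the kernel is not flipped); the Sobel kernel is used purely as a familiar example.

```python
# The operation described above: the kernel slides over the input, and each
# output pixel is the elementwise product of kernel and patch, summed.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)   # multiply elementwise, sum
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
sobel_x = np.array([[1., 0., -1.],    # classic edge-detection kernel
                    [2., 0., -2.],
                    [1., 0., -1.]])
print(conv2d(image, sobel_x))         # 3x3 output: (5-3+1) x (5-3+1)
```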


Windows Server 2019 embraces SDN

The new virtual network peering functionality in Windows Server 2019 allows enterprises to peer their own virtual networks in the same cloud region through the backbone network. This provides the ability for virtual networks to appear as a single network. Fundamentally, stretched networks have been around for years and have provided organizations the ability to put server, application and database nodes in different sites. However, the challenge has always been the IP addressing of the nodes in opposing sites. When there were only two static sites in a traditional wide area network, the IP scheme was relatively static. You knew the subnet and addressing of Site A and Site B. However, in the public cloud and multi-cloud world – where your target devices may actually shift between racks, cages, datacenters, regions or even hosting providers – having addresses that may change based on failover, maintenance, elasticity changes, or network changes creates a problem. Network administrators have already spent, and will drastically increase, the amount of time they spend addressing, readdressing, updating device tables and so on to keep up with the dynamic movement of systems.


Managing a hybrid cloud computing environment


Ensuring the security of physical edge networking connections and the connectivity of all communication is equally essential. This requires redundant networking components that utilize built-in failover capabilities. Finally, careful selection of the power infrastructure is vital to supporting all elements of edge computing. The ability to maintain power at all times via the use of backup power and integration of the remote monitoring of the power infrastructure into the customer’s management system are paramount. You can do this by seeking UPSs, rackmount power distribution units (PDUs) and power management software with remote capabilities. Being able to remotely reboot UPSs or PDUs can be extremely helpful in edge applications. In addition, solutions like Eaton’s Intelligent Power Manager software can enhance your disaster avoidance plan by allowing you to set power management alerts, configurations and action policies. By creating action policies for remediation, Eaton enables you to automate server power capping, load shedding and/or virtual machine migration should problems occur.



Quote for the day:


"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell


Daily Tech Digest - June 03, 2018

How financial institutions can start with artificial intelligence

AI is increasingly becoming the way for leading financial services to provide everything from customer service to investment advice, says PwC’s Mike Quindazzi. Yet, few banking industry CEOs are considering the impact of AI on future skills, despite the impact that AI is already having on trading desks and reshaping customer interactions. “Protecting the base and avoiding risks is clear and present in the minds of banking leaders,” says Quindazzi. “Many challenges persist due to bias, privacy, trust, lack of trained staff, and regulatory concerns. In the near term, ‘augmented intelligence’ solutions, in which machines assist humans, are quickly making their way into operation.” 'AI or die!' seems to be the rallying cry at every banking conference these days, according to Bradley Leimer, of Explorer Advisory and Capital. “But before going down the path of building and implementing solutions leveraging AI and similar tools, financial institutions must ask themselves where they’re falling short in regard to providing their customers true lifetime value around their finances,” Leimer adds.



Maintaining Malaysia’s digital transformation trajectory

Digital transformation - or DX, as IDC calls it - should be placed as the core strategy, and organisations should accelerate the DX pace to thrive in the competitive digital ecosystem. IDC Malaysia's FutureScape 2018 predictions primarily focus on the four pillar technology areas: Cloud, Mobility, Social, and Big Data and analytics, as well as six innovation accelerators: Augmented and Virtual Reality (AR/VR), Cognitive/AI Systems, Next-Gen Security, Internet of Things (IoT), 3D Printing and Robotics. Some of IDC's expectations are eye-opening: By 2021, at least 20 percent of Malaysia's GDP will be digitised - with growth in every industry driven by digitally-enhanced offerings, operations and relationships; by 2020, investors will use platform/ecosystem, data value, and customer engagement metrics as valuation factors for all enterprises. ... By getting the private sector to partner in funding #MYCYBERSALE 2017, with continued support by MDEC, PIKOM was able to reduce government funding for the project by 40 percent while increasing the Gross Merchandise Value (GMV), or sales generated online, by 55 percent.


Innovative companies think differently about people

Modern businesses are facing new problems that need fresh thinking. Hiring the same people as before isn’t going to cut it. The variety of perspectives in diverse teams deliver better products, services and customer experiences – and obviously that’s good for business. The ABC recently launched Employable Me, a show about a group of jobseekers aiming to prove that having a neurological condition shouldn’t make them unemployable. It goes to the heart of this need to explore new talent pools. The unemployment rate for people on the autism spectrum was above 30 per cent in 2015, more than three times the rate for people with disability and almost six times the general population. Yet people with these disorders are often highly intelligent. Some have great attention to detail or an intense commitment to delivering high-quality work. They tend to be lateral thinkers and have immeasurable value to offer.


What frustrates Data Scientists in Machine Learning projects?


There is an explosion of interest in data science today. One just needs to insert the tag-line ‘Powered-by-AI’, and anything sells. But that’s where the problems begin. Data science sales pitches often promise the moon. Then, clients raise the expectations a notch up and launch their moonshot projects. Ultimately, it’s left to the data scientists to take clients to the moon, or leave them marooned. An earlier article, ‘4 Ways to fail a Data scientist Job interview’, looked at the key blunders candidates commit in the pursuit of data science. Here, we wade into the fantasy world of expectations from data science projects, and find out the top misconceptions held by clients. We’ll talk about the 8 most common myths I’ve seen in machine learning projects, and why they annoy data scientists. If you’re getting into data science, or are already mainstream, these are potential grenades that might be hurled at you. Hence, it would be handy knowing how to handle them.


The blockchain explained for non-engineers

Blockchain buzz is inescapable. And while the technology has transformed some companies and minted fresh millionaires in a dazzlingly short period of time, blockchain is as confounding as it is powerful. If you're confused by the hype, you're not alone. The blockchain is a decentralized, vettable, and secure technology that has, in less than a decade, become a powerful driver of digital transformation poised to help create a new employment economy. Evangelists claim blockchain tech will disrupt industrial supply chains, streamline real estate transactions, and even redefine the media industry. "Think of blockchain as the next layer of the internet," said Tom Bollich, CTO of MadHive. "HTTP gave us websites ... now we have blockchain, which is like a new layer of computing." Employment data seems to validate blockchain's current hype cycle. Google search data indicates a cresting wave of interest in the tech, and according to Indeed.com searches for blockchain-related jobs spiked nearly 1000 percent since 2015. Enterprise organizations like Capital One, Deloitte, ESPN, and eBay are hiring blockchain engineers, retraining project managers to facilitate integrations, and even searching for specialized attorneys.


Unusual Breach Report by Humana Shines Light on Fraud Prevention

In a statement provided to Information Security Media Group, a Humana spokeswoman says the company's initial analyses, and its continuous, ongoing monitoring activities, indicate that fewer than 200 members were impacted in the incident. "The abnormal activity was first identified as an anomaly in our interactive voice response reporting tools. It was noted that an abnormally high abandon rate was being observed from a small number of telephone exchanges," she says. "All evidence in this particular incident indicates that the abnormal activity was benign." Ryan Kriger, Vermont's assistant attorney general, tells ISMG that Humana reported to the state that 11 Vermont residents were affected by the recent incident. He adds that it's not clear if the incident reported by Humana involving callers who might have been trying to confirm the personally identifiable information of other individuals qualifies as a data breach.


Network security has become irrelevant: Zscaler CEO

Most of the thousands of security companies today sell on the fear of uncertainty. There is so much noise that it is very hard to figure out who to choose. I envisioned the digital transformation taking place in the enterprise, and how it would disrupt traditional network and security models. I asked myself simple questions before starting Zscaler. The world was changing, and employees were beginning to go mobile. More and more applications were becoming SaaS-enabled. I saw a lot of cloud based businesses such as Salesforce, and I figured that security could also be done in the cloud. That’s when I decided to create a security platform where companies can comfortably and securely access SaaS applications, without having to worry about buying, deploying and managing. The differentiating factor for us is that we are not looked at as a security product. We are an enabler of business because companies want agility in today’s environment. The idea is to enable businesses to do things better and in a secure manner. Our technology solution is designed to provide security across the cloud stack.


Why a Coffee Shop Will Probably Be Your Workspace Within 10 Years


A study by Ctrip of 500 volunteers found that individuals who worked from home were 13.5 percent more efficient and 9 percent more engaged than their peers working in the office. They also took shorter breaks, fewer sick days, and less time off, and attrition rates were 50 percent better. Job satisfaction was higher overall, too. Another study, by TINYpulse, had similarly positive results. Subsequently, more and more companies--particularly those in the transportation, computer, information systems and mathematics industries--are giving workers the leeway to work outside the standard cubicle. These companies don't particularly care where workers work, so long as they finish the jobs they're assigned on time and at the expected quality level. In fact, they're using flexible work options to attract new hires, particularly millennials. I should point out here that, in the Ctrip study, many workers eventually went back to the office when given the opportunity. Workers want flexibility, but they also want to get away from being so isolated and to combat the accurate perception that they wouldn't be considered prominently for bonuses and promotions.


Parallel programming no longer needs to be an insurmountable obstacle

Parallel code gets its speed benefit from using multiple threads instead of the single thread that sequential code uses. Deciding how many threads to create can be tricky, because more threads don't always result in faster code: if you use too many threads, the performance of your code might actually go down. A couple of rules of thumb will tell you how many threads to choose, depending mostly on the kind of operation you want to perform and the number of available cores. Computation-intensive operations should use a number of threads lower than or equal to the number of cores, while IO-intensive operations like copying files make little use of the CPU and can therefore use a higher number of threads. The code doesn't know which case applies unless you tell it what to do; otherwise, it will default to a number of threads equal to the number of cores.
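
A minimal Java sketch of these rules of thumb (the article names no particular language, and the 4x multiplier for the IO-bound pool is an illustrative assumption, not a figure from the article):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolSizing {
    public static void main(String[] args) {
        // The default described above: one thread per core.
        int cores = Runtime.getRuntime().availableProcessors();

        // Computation-intensive work: no more threads than cores,
        // since extra threads only compete for the same CPUs.
        ExecutorService cpuPool = Executors.newFixedThreadPool(cores);

        // IO-intensive work (e.g., copying files): threads mostly wait
        // on the disk or network, so a larger pool keeps work flowing.
        ExecutorService ioPool = Executors.newFixedThreadPool(cores * 4); // multiplier is a guess

        // ... submit tasks to the appropriate pool, then shut down ...
        cpuPool.shutdown();
        ioPool.shutdown();
    }
}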


The Hybrid Cloud Habit You Need to Break

Plenty of small companies start off with a data center in the basement. A few years and a couple of satellite offices later, the company decides to move some applications onto a private cloud to accommodate the geography of its workforce. A few years after that, it moves other applications to a public cloud service to stay ahead of traffic surges, lower costs, and add agility. At each stage, the network administrator establishes security protocols for the new environment based on the new architecture. But many network administrators never go back and adjust the data center’s security in light of the new private cloud, and the protocols are seldom adjusted when the second cloud is added. There are lots of reasons for this. Budget plays a role: a planned cloud adoption might have a budget for security that only factors in the new environment. Or the administrator might believe that, having checked for hardware and policy compatibility between the new environments, the security policies are aligned, and there’s no need and no time to go back.



Quote for the day:


"Confidence comes not from always being right but from not fearing to be wrong." -- Peter T. McIntyre


Daily Tech Digest - June 02, 2018

AI application in CX
For a simple, isolated interaction, AI is able to deliver results by simply knowing that an email is an email and a campaign is a campaign. Our web analytics and CRM platforms take advantage of this inherent luxury. But in holistic, cross-channel journey analytics, the idea that touchpoints of a similar category will be the same across enterprises is an antiquated notion. Customer journeys are as unique to individual businesses as fingerprints. Every company has its own set of touchpoints and a distinct method for employing those engagements in its customer experience. For AI to deliver value, it must be given some context. By context, I mean more than simply designating a certain interaction as an “inbound call” and another as “order fulfillment.” AI must know the significance of these events in shaping customer behavior. That requires an awareness of both the journey that these touchpoints helped to shape and the KPIs which were subsequently impacted by that customer behavior—whether related to revenue, profitability, customer lifetime value, customer satisfaction or other factors driving high-level business performance.
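
To make that notion of context concrete, here is a hypothetical Java sketch of a touchpoint event enriched with journey and KPI context; every field name is invented for illustration and does not come from any particular journey-analytics platform:

// Hypothetical, illustrative shape of a context-enriched touchpoint event.
public class TouchpointEvent {
    String eventType;    // bare category, e.g. "inbound call" or "order fulfillment"
    String journeyId;    // which customer journey this touchpoint belongs to
    String journeyStage; // where in that journey it occurred, e.g. "onboarding"
    String impactedKpi;  // the KPI the resulting behavior feeds, e.g. "customer lifetime value"
    double kpiDelta;     // measured contribution to that KPI, when known

    TouchpointEvent(String eventType, String journeyId, String journeyStage,
                    String impactedKpi, double kpiDelta) {
        this.eventType = eventType;
        this.journeyId = journeyId;
        this.journeyStage = journeyStage;
        this.impactedKpi = impactedKpi;
        this.kpiDelta = kpiDelta;
    }
}

The first field is all that a channel-by-channel tool sees; the remaining fields carry the journey and KPI awareness the author argues AI needs.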



The real issue here isn't the level of spending--it's the underlying philosophies and organizational cultures driving (and determining) the tech spending levels. In a recent blog post, Chris Skinner wrote about the excuses smaller banks give to explain their resistance to technology ... "If you don’t think you can change a teeny-weeny bank, then what the hell are you doing there? Massive banks are changing and they’ve got 1000x the challenges you have. Most of the barriers to small financial firms seizing the digital opportunities are created by negative thinking. But then I have to say that most small financial firms I’ve met are ultimately constrained by the negative thinking of their CEO. This is because many small financial firms are led by a CEO who was anointed ages ago. They got the job, and they’ve been there for years. They’re not really a CEO to be honest, but just a caretaker for the next guy." I agree with the "massive banks are changing" point, but not the rest of the quote. I know (and work with) a lot of mid-size bank and credit union CEOs who have been in their role for a while


New European Union Data Law GDPR Impacts Are Felt By Largest Companies


These regulations are impacting companies globally, not just European firms. Forbes reported in December 2017 that GDPR would affect US-based businesses as well – even those without clients or operations in the European Union. Indeed, as Oliver Smith reported earlier this month, GDPR has cost US-based companies nearly $7.8 billion in compliance spending to avoid multi-million dollar fines and penalties.  ... These numbers are astronomical, and for data-based entrepreneurial startups, prohibitive. As CNN’s Ivana Kottasová reports, “The cost of complying with the new law has already forced an online game producer, a small social network, and a mobile marketing firm to close key businesses or shut down entirely.” This regulation will greatly impact data-driven businesses in Europe and across the globe. The 28-state European Union is the world’s second-largest economy, one that companies with a digital presence can’t help but interact with. These are the companies that capture our interest as 30 Under 30 observers. As these startup founders grapple with the implications of the GDPR, many hesitate to move forward until they (1) understand, and (2) can comply with, these regulations in a cost-effective way.


Continuous Development Will Change Organizations as Much as Agile Did

Today, leading companies are embracing a new business process methodology. Once again, it has started in the bowels of technology companies and startups. And, once again, business leaders would do well to pay close attention to the strategic implications. The methodology is Continuous Development, which, like agile, began as a software development methodology. Rather than improving software in one large batch, updates are made continuously, piece by piece, enabling software code to be delivered to customers as soon as it is completed and tested. Companies that can successfully implement Continuous Development throughout their organization will find dramatic strategic benefits ... Continuous Development is a growing trend in the software industry, and for good reason: it is a more effective method of software development for achieving both external and internal objectives. Various estimates and surveys suggest that as many as 20% of software professionals are using some form of it. Business executives at companies large and small would be wise to embrace this new methodology and even push their organizations to adopt this more flexible, powerful technique for developing technology products.


Achieving Intelligent Automation – Leveraging IoT Data from Automated Systems

In the future, Brendan believes we will see more city-level optimization of logistics and transport networks. Aside from the obvious energy-saving benefits of AI in such applications, Brendan says there could be benefits that are harder to perceive at first glance. For example, HVAC systems in buildings typically have hundreds of different sensors gathering data; sensors can record air flow and air temperature from vents connected to the outside environment. Brendan goes on to say that this is but a step towards more complete autonomy through AI. He adds that today AI can identify anomalies; two years from now, AI will likely be able to identify whether an anomaly is a critical problem or not (again from historical evidence). Analogous to how autonomous vehicles are now removing humans from the loop, he sees AI platforms aimed at leveraging IoT data developing similar capabilities in the future. Looking five years ahead, Brendan has the following predictions about how AI applications for automated systems might evolve
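
To make the anomaly-spotting step concrete, here is a minimal Java sketch that flags a sensor reading far outside its recorded history; the z-score approach, the threshold, and the sample data are all illustrative assumptions, not details from Brendan's platform:

import java.util.List;

public class AnomalyCheck {
    // Flags a reading that deviates sharply from historical behavior.
    static boolean isAnomaly(List<Double> history, double reading, double zThreshold) {
        double mean = history.stream().mapToDouble(Double::doubleValue).average().orElse(0);
        double variance = history.stream()
                .mapToDouble(v -> (v - mean) * (v - mean))
                .average().orElse(0);
        double std = Math.sqrt(variance);
        if (std == 0) return false; // no variation recorded yet, nothing to compare against
        return Math.abs(reading - mean) / std > zThreshold;
    }

    public static void main(String[] args) {
        // Invented vent-temperature history, in degrees Celsius.
        List<Double> ventTemps = List.of(21.0, 21.4, 20.8, 21.1, 21.3);
        System.out.println(isAnomaly(ventTemps, 35.0, 3.0)); // true: likely a fault
    }
}

Deciding whether such an anomaly is a critical problem is, as the excerpt notes, the harder follow-on step that still requires historical evidence and, for now, human judgment.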


Banking Playing Catch Up in Technology, Conceding Battle for Payments


As the banking industry continues to move more transactions to digital channels and adjusts the technology used in back-office operations, costs are being reduced, productivity is increasing and responses to risk and compliance needs are improving. As a result, and for the first time in its five-year history, the annual Economist Intelligence Unit survey on the future of retail banking, conducted for Temenos, shows that global bank executives are now more concerned with technology-driven trends than with regulation. About 58% of respondents in the survey of 400 senior banking executives across the globe said “changing customer behavior and demands” will have the biggest impact on retail banks in the years to 2020. In addition, “technology and digital” (48%) now ranks as a bigger trend than “regulatory fines and recompense orders” (43%). This is not true in North America, where regulatory fines and penalties are still the primary concern for large banks (56%), compared with just 34% who said the same about new technologies such as artificial intelligence and blockchain (14 percentage points lower than the global result).


How to avoid the coming cloud complexity crisis

Create a complexity management plan. This means taking a few steps back and understanding your own issues before you start throwing processes, technology, and a lot of cash at the problems. In this plan, you need to define your approach to dealing with traditional and cloud-driven complexity, how systems will be tracked, how you’ll minimize complexity going forward, and how technology will assist you. Select the tools needed to manage complexity. This is a Pandora’s box, because everyone has an idea of which tools will be helpful. In my work, I end up in a lot of emotional discussions around something that should be very logical. You need to pick tools that provide the following capabilities: configuration management, devops automation, hybrid monitoring and management, and cloud-specific tools such as cloud services brokers (CSBs) or cloud management platforms (CMPs). Set up processes. This means taking the time to figure out the core processes for tracking cloud and traditional resources, the services bound to those resources, and the data that exists around those resources. How do you add and/or remove resources? Who does it? And what tools do you use?


How Capital One sees digital identity as a business opportunity

“It fits with Capital One’s strategy, not just from a digital identity services perspective but with the broader platform business model that Capital One has used to expand its set of services beyond just a narrow set of financial services,” Shevlin said. “Capital One is positioning itself to become the Amazon of banking more than any other big bank is.” Nash was the director of identity services at Google as well as the senior director of consumer identity at PayPal. He announced in a blog post that Confyrm, which he founded in 2013, would be sold to Capital One. “We were eager to scale our efforts to help restore trust in digital identities, and we were fortunate to find a partner in Capital One, who shared our vision and commitment to improving consumer identity protection,” Nash wrote in the post. In response to inquiries from American Banker, Capital One provided a link to Nash's blog post. The price of the Confyrm deal was not disclosed. Capital One is not the only bank working with APIs to help protect customers online.


Demystifying black box AI


In Hong Kong, the finance and insurance industries are probably the most likely to be affected by the black box AI problem, according to Chun. These industries are increasingly using machine learning in fraud detection, investment advice, portfolio management, algorithmic trading, and loan or insurance underwriting. Bias can result from machine learning or AI systems: machine learning learns from data, so it will replicate any biases in the data set. “If the data models themselves contain biases then the results from AI machine learning will potentially also be biased,” he said. “The black box AI phenomenon is particularly problematic for consumer-facing applications,” Chun added. “For example, if a loan or insurance policy got rejected because of AI recommendations, the consumer would want to know why.” In these situations, humans have to be involved in reviewing the AI algorithm or offering explanations to the consumer. Echoing the same sentiment, Samson Tai, IBM Hong Kong’s distinguished engineer and CTO, said, “Biases arise mainly due to problems in data processes rather than training algorithms. It’s important to be aware of the issues of biases in data sets.”
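
One simple first check for the data-set bias both speakers describe is to compare outcome rates across groups in the training data before any model is trained. A minimal, hypothetical Java sketch (the groups and labels are invented for illustration):

import java.util.HashMap;
import java.util.Map;

public class BiasCheck {
    public static void main(String[] args) {
        // (group, approved?) pairs from a hypothetical loan data set.
        String[] groups = {"A", "A", "A", "B", "B", "B"};
        boolean[] approved = {true, true, false, false, false, true};

        Map<String, int[]> counts = new HashMap<>(); // group -> {approvals, total}
        for (int i = 0; i < groups.length; i++) {
            int[] c = counts.computeIfAbsent(groups[i], k -> new int[2]);
            if (approved[i]) c[0]++;
            c[1]++;
        }
        // A large gap between groups here will be replicated by any model
        // trained on this data, exactly as the article warns.
        counts.forEach((g, c) ->
                System.out.printf("group %s: approval rate %.2f%n", g, (double) c[0] / c[1]));
    }
}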


Don't Neglect Physical Security of 'Workstations'

A May 30 cybersecurity alert issued by the Department of Health and Human Services' Office for Civil Rights urges HIPAA covered entities and business associates (BAs) to pay closer attention to providing good physical security for "workstations," which include a wide variety of devices. In its monthly newsletter alert for May, OCR notes that while the HIPAA Security Rule specifically references "workstations," the term is defined in the HIPAA rule as "a computing device, for example a laptop or desktop computer, or any other device that performs similar functions - and electronic media - stored in its immediate environment. Portable electronic devices ... included in this definition ... could include tablets, smart phones and similar portable electronic devices." Physical security is an important component of the HIPAA Security Rule that is often overlooked, OCR writes. "What constitutes appropriate physical security controls will depend on each organization and its risk analysis and risk management process."



Quote for the day:


"You're making the biggest mistake of all when you think your title means you can't be mistaken." -- @LeadToday