Daily Tech Digest - June 22, 2018

Oracle now requires a subscription to use Java SE
Oracle has revamped its commercial support program for Java SE (Standard Edition), opting for a subscription model instead of one that had businesses paying for a one-time perpetual license plus an annual support fee. The subscriptions will be available in July 2018. (Personal, noncommercial usage continues to be free and does not require a subscription.) Called Java SE Subscription, the new program for mission-critical Java deployments provides commercial licensing, with features offered such as the Advanced Java Management Console. Also, Oracle Premier Support is included for current and previous Java SE releases. It is required for Java SE 8, and includes support for Java SE 7.  ... The price is $25 per month per processor for servers and cloud instances, with volume discounts available. For PCs, the price starts at $2.50 per month per user, again with volume discounts. One-, two-, and three-year subscriptions are available. Oracle has published the terms of its new Java SE Subscription plans. Under the previous pricing, the Java SE Advanced program cost $5,000 for a license for each server processor plus a $1,100 annual support fee per server processor, as well as a $110 one-time license fee per named user and a $22 annual support fee per named user.
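For a sense of how the two models compare over time, here is a rough back-of-the-envelope calculation in Python using only the list prices quoted above; it ignores volume discounts and named-user licensing, so treat the figures as an illustration rather than a quote.

# Rough cost comparison based on the list prices quoted above.
# Assumes no volume discounts; actual Oracle terms may differ.
def legacy_server_cost(processors: int, years: int) -> int:
    """Old Java SE Advanced model: one-time license plus annual support, per processor."""
    return processors * (5000 + 1100 * years)

def subscription_server_cost(processors: int, years: int) -> int:
    """New Java SE Subscription model: $25 per processor per month."""
    return processors * 25 * 12 * years

for years in (1, 3, 5):
    legacy = legacy_server_cost(processors=4, years=years)
    subscription = subscription_server_cost(processors=4, years=years)
    print(f"{years} year(s): legacy ${legacy:,} vs subscription ${subscription:,}")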


Making intelligence intelligible with Dr. Rich Caruana

Sometimes, it’s just a black box because it’s protected by IP. So, many people will have heard of this model that is used for recidivism predictions. So, this model was created by a company, and the model is a pay-for-use model. And the model is just not something that’s known to us, because we’re not allowed to know. By law, it’s something the company owns, and the courts have, several times, upheld the right of the company to keep this model private. So maybe you’re a person who this model has just predicted is at high risk of committing another crime, and because of that, maybe you’re not going to get parole. And you might say, “Hey, I think I have a right to know why this model predicts that I’m high-risk.” And so far, the courts have upheld the right of the company that created the model to keep the model private and not to tell you in detail why you’re being predicted as high or low risk. Now, there are good reasons for this. You don’t necessarily want people to be able to game the model. And in other cases, you really want to protect the company who went to the expense and risk of generating this model. But that’s a very complex question.


A QA team finds continuous testing benefits worth the effort


Continuous integration was born around the idea that the earlier you find a bug, the cheaper it is to fix. But this priority could become problematic if there is not an easy, fast and reliable way to assess whether changes are ready to be integrated and then ready to go to production. When you adopt continuous testing as a key practice, your code must always be ready for integration, according to Isabel Vilacides, quality engineering manager at CloudBees. "Tests are run during development and on a pull request basis," she explained. "Once it's integrated, it's ready to be delivered to customers." Continuous testing doesn't stop at functional testing; it involves considering nonfunctional aspects, such as performance or security. The process aims to prevent bugs through code analysis, before risks become apparent in production. Continuous testing requires cohesive teams, where quality is everyone's responsibility, instead of separate teams for development, testing and release. The approach also makes automation a priority and shifts quality to the left, making it an earlier step in the pipeline. 


CISO soft skills in demand as position evolves into leadership role

In the old days, the CISO, I was told, was just an advisory position. Now, my roles, the roles I've held in the last seven years or so, are much more than advisory. Advisory is part of it for sure, but there's a lot more leadership involved. I see it becoming more and more a position reporting directly to the CEO, a truly C-level position. I see CISOs having vice presidents reporting to them going forward. And I think my job is increasingly being described as chief ethicist, asking: What's the right thing to do, and not just what's the most secure thing to do? What's the proper behavior? What do customers expect from us? If a compromise has to be made, what's the most ethical compromise to make? ... It's important for at least two different reasons. One, from a practical perspective, I've talked a lot about the skills gap. If we're blocking 50% of the planet from joining this career path, we're really contributing to our biggest challenge. Then the other part: Women across the globe are economically oppressed, and information security is a lucrative field. I want to get women into the information security field so they can be financially independent and make a good living.


It’s not easy to move from a private cloud to a hybrid cloud

Sadly, the move from a private cloud to a public cloud is not easy, whether you go hybrid or all-public. The main reason is that there is no direct mapping from private cloud services, which cover the basics (storage, compute, identity access management, and database), to public cloud services, which offer those basics plus thousands of other higher-end services. Private clouds today are roughly where public clouds were in 2010, while public clouds are, of course, in 2018. You're in essence migrating across nearly a decade's worth of technology advances as you move your applications between private and public. Complexity also comes in when you've already coupled your applications to the services in the private cloud, which is typically going to be OpenStack. There are very few OpenStack deployments on public clouds, and none at the Big Three providers (Amazon Web Services, Google Cloud Platform, and Microsoft Azure). That means you can't do an A-to-A mapping of the cloud services from your private cloud to the public clouds, which in turn means you need to remap those services to similar services on the public cloud.
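The remapping exercise described above often starts as little more than a lookup table. The sketch below is illustrative only: the service equivalences are rough pairings a migration assessment might produce, not an official compatibility matrix from any provider.

# Illustrative (not authoritative) mapping of OpenStack services to rough
# public-cloud equivalents, of the kind a migration assessment produces.
OPENSTACK_TO_PUBLIC = {
    "Nova (compute)":       {"AWS": "EC2", "GCP": "Compute Engine", "Azure": "Virtual Machines"},
    "Swift (object store)": {"AWS": "S3", "GCP": "Cloud Storage", "Azure": "Blob Storage"},
    "Cinder (block store)": {"AWS": "EBS", "GCP": "Persistent Disk", "Azure": "Managed Disks"},
    "Keystone (identity)":  {"AWS": "IAM", "GCP": "Cloud IAM", "Azure": "Azure AD"},
}

def remap(service: str, target_cloud: str) -> str:
    """Return the closest equivalent service, or flag it for manual rework."""
    return OPENSTACK_TO_PUBLIC.get(service, {}).get(target_cloud, "no direct equivalent; rework needed")

print(remap("Swift (object store)", "AWS"))   # S3
print(remap("Heat (orchestration)", "AWS"))   # no direct equivalent; rework needed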


The rise of active defense in cybersecurity strategies

As in any game against an adversary, you need both defensive and offensive strategies. An active defense adds the offense-driven actions so that organizations can proactively detect and derail would-be attackers before they have time to get comfortable within the network, stopping attacks early and gathering the threat intelligence required to understand the attack and prevent a similar recurrence. Sometimes active defense includes striking back at an attacker, but this is reserved for military and law enforcement that have the resources and authority to confirm attribution and take appropriate action. An active defense strategy changes the playbook for cybersecurity professionals by combining early detection, substantiated alerts and information sharing to improve incident response and fortify defenses. It is no longer “a nice to have,” but instead is becoming more widely accepted as a “must have” as prevention-only tactics are no longer enough. With well-orchestrated breaches continuously making headlines, an active defense strategy is becoming a priority. 


This new Windows malware wants to add your PC to a botnet - or worse

The malware comes equipped with three different layers of evasion techniques, which the researchers at Deep Instinct who uncovered it describe as complex, rare and "never seen in the wild before". Dubbed Mylobot after a researcher's pet dog, the malware has unknown origins and an unknown delivery method, but it appears to have a connection to Locky ransomware -- one of the most prolific forms of malware last year. The sophisticated nature of the botnet suggests that those behind it aren't amateurs, with Mylobot incorporating various techniques to avoid detection. They include anti-sandboxing, anti-debugging, encrypted files and reflective EXE loading, which is the ability to execute EXE files directly from memory without having them on disk. The technique is not common, was only uncovered in 2016, and makes the malware even harder to detect and trace. On top of this, Mylobot incorporates a delaying mechanism that waits for two weeks before making contact with the attacker's command and control servers -- another means of avoiding detection.


Plan Now For Your Migration To Windows Server 2019

Web applications running on IIS are easy to test because most code is just HTML, .NET or another Web app that runs on top of the IIS/Web platform. Setting up a Windows Server 2019 server with IIS and then uploading Web code to it is a quick and easy way to confirm that the Web app works, and can easily be the first 2019 server added to an environment. Fileservers are also good early targets for migrating old to new. Many times, fileservers have gigabytes or even terabytes of data to copy across, and they are also the systems that may not have been upgraded recently. In early-adopter environments, the old fileservers are often still running Windows Server 2008 (which goes end-of-life in the summer of 2019) and could use an upgrade. File migration tools like Robocopy, or a drag-and-drop between Windows Explorer windows, can retain tree and file structures as well as access permissions as content is copied between servers. Tip: after content is copied across, new servers can be renamed with the old server name, minimizing interruption of user access.
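As a concrete illustration of the Robocopy approach mentioned above, here is a minimal sketch, wrapped in Python so it can be scheduled and logged; the server and share names are hypothetical, and the switches should be checked against your own environment before running anything.

# Hypothetical file-server migration sketch (Windows only; requires Robocopy).
# Copies a share tree to the new server while preserving NTFS permissions.
import subprocess

SOURCE = r"\\OLDFS01\Departments"   # hypothetical Windows Server 2008 source share
TARGET = r"\\NEWFS01\Departments"   # hypothetical Windows Server 2019 target share

cmd = [
    "robocopy", SOURCE, TARGET,
    "/E",        # copy all subdirectories, including empty ones
    "/COPYALL",  # copy data, attributes, timestamps, ACLs, owner and auditing info
    "/R:2",      # retry failed copies twice
    "/W:5",      # wait five seconds between retries
    "/LOG:migration.log",
]

# Robocopy exit codes below 8 indicate the copy completed without failures.
result = subprocess.run(cmd)
print("Copy finished, exit code:", result.returncode)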


Strategies for Decomposing a System into Microservices

Sometimes you will find that they have different mental models for the same business concepts or use the same terms to describe different concepts; if so, it’s an indication that these concepts belong to different bounded contexts. From the beginning, Khononov and his team used these discovered boundaries to define services, with each boundary becoming a service. He notes, though, that these services represent quite wide business areas, sometimes resulting in a bounded context covering multiple business subdomains. As their next step, they instead used these subdomains as boundaries and created one service for each business subdomain. In Khononov’s experience, having a one-to-one relationship between a subdomain and a service is quite a common approach in the DDD community, but they didn’t settle for this; instead they continued and strove for even smaller services. Looking deeper into the subdomains, they found business entities and processes and extracted these into their own services. At first, this final approach failed miserably, but Khononov points out that in later projects it has been more successful.


Why you should train your staff to think securely

Far too often, information security teams have only the broadest overview of the wider workings of their organisations. Other staff, meanwhile, tend to have little knowledge of or interest in information security practices, which they often believe have been designed to hinder their day-to-day work. However, when any employee with Internet access can jeopardise the entire organisation with a single mouse-click, it should be clear that the responsibility for information security lies with every member of staff and that security practices need to be embedded in the working practices of the whole business. Insider attacks are not limited to the malicious actions of rogue staff. The term also refers to the unwitting behaviour of improperly trained employees, or to the exploitation of inappropriately applied privileges and poor password practices by malicious outsiders. Staff need regular training on information security practices to ensure they’re aware of the risks they face on a daily basis. The vast majority of malware is spread by drive-by downloads and phishing campaigns, both of which exploit human error.



Quote for the day:


"Trust is one of the greatest gifts that can be given and we should take creat care not to abuse it." --Gordon Tredgold


Daily Tech Digest - June 21, 2018

AI is Powering the Growing Emotional Intelligence Business


Understanding emotions from text requires a lot of processing power. EMRAYS turned to GPUs from NVIDIA to power deep learning in order to rank, list and search web pages according to their emotional content. “We need that acceleration to keep up with the complexities of human emotion,” Tero stated. Mojeek users can now search the web and select results for emotions including love, laughter, surprise, anger or sadness. In order to focus on the reader’s emotional reaction, the EMRAYS search tool does not just count the number of positive or negative words in a text, or parse the tone of the writer. Instead, they listen to millions of reactions on social media each day. They use this data to train artificial neural networks. The networks learn to predict what kind of emotional reaction a piece of written content might prompt in a human reader. EMRAYS says it has analyzed over one billion emotional data points. In this way, social media platforms such as Facebook and Twitter produce a volume of information each day that provides EMRAYS with training data to help improve the accuracy of its neural networks.
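EMRAYS has not published its models, so the following is only a generic sketch of the technique described: training a text classifier on reaction-labelled examples. The data is invented, and a simple scikit-learn pipeline stands in here for the neural networks mentioned in the article.

# Generic sketch of training a text classifier on reaction-labelled examples.
# The texts and labels are invented; this is not EMRAYS' actual model or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Our rescue dog finally learned to fetch!",
    "The airline lost my luggage again.",
    "Scientists discover water plumes on a distant moon.",
    "Local shelter reunites family with missing cat.",
]
reactions = ["laughter", "anger", "surprise", "love"]  # invented reaction labels

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, reactions)

print(model.predict(["My flight was cancelled with no warning"]))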


RPA involves the codification of business rules, but in many cases, business rules haven’t been examined for many years and don’t make sense in the current environment. In addition, existing business rules are sometimes described as requiring judgment, but in actuality they can be turned into more-accurate and more-consistent algorithms for better, more-consistent decision making. At a leading global logistics company, for example, the business rules in a claims process suggested that decisions on tolerances for refunds required the judgment of a manager. But when interviewed, the manager was able to clearly articulate rules of thumb he followed, which could be defined as rules for the RPA process to follow. In many companies, the level of process knowledge and understanding is quite low. The company may have collections of standard operating procedures, but they are often poorly documented and out of date. Each employee typically follows their understanding of best practices. By working with high-performing employees to challenge and improve the process and embed this into RPA, we have seen not only significant improvements in the processes being automated but also reduced process problems across other parts of the business.
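The claims example above is essentially about turning a rule of thumb into an explicit decision rule. Below is a minimal sketch of what that codification might look like, with thresholds invented purely for illustration.

# Invented example of codifying a refund-tolerance "rule of thumb" so an
# automated process can apply it consistently instead of routing every claim
# to a manager. The thresholds are illustrative, not the company's actual rules.
def refund_decision(claim_amount: float, shipment_value: float) -> str:
    tolerance = claim_amount / shipment_value if shipment_value else 1.0
    if claim_amount <= 50:        # small claims: approve automatically
        return "auto-approve"
    if tolerance <= 0.10:         # within 10% of shipment value
        return "auto-approve"
    if tolerance <= 0.25:         # borderline: route to a human reviewer
        return "manual-review"
    return "escalate-to-manager"  # large relative claims still need judgment

print(refund_decision(claim_amount=40, shipment_value=900))    # auto-approve
print(refund_decision(claim_amount=400, shipment_value=1000))  # escalate-to-manager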


Is Blockchain a Universal Platform?


It would be completely ridiculous to suggest that the insurance industry is an emerging market – in fact, it is the largest market in the world, with a staggering $1.2 trillion in revenue. Despite this position, the insurance market is caught in a slog deeply rooted in traditional practices. Blockchain can be used to create sub-markets within the industry: Peer-to-peer insurance, which cuts out the middlemen and returns a greater portion of premiums to the policyholder; Parametric insurance, which uses a smart contract to automatically pay twenty percent of any type of claim; and Microinsurance, which can be used for international donations that need to be monitored every step of the way from payment to reception. These minor markets provide competition within the industry that will boost consumer empowerment and overall growth. ... Blockchain drastically improves HIPAA compliance through the exact same system that empowers the real estate market. Middlemen can view the information through a blockchain ledger but cannot alter it in any way. That privilege is reserved only for the prescriber, the pharmacist, and you.
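Real parametric policies run as smart contracts on-chain, typically in a language such as Solidity; the sketch below only illustrates the underlying payout logic in Python, with an invented trigger and the fixed-percentage payout mentioned above.

# Illustrative payout logic for a parametric policy: when a measurable trigger
# is met, a fixed share of the insured amount pays out automatically, with no
# claims adjuster. The trigger, ratio and amounts are invented for this sketch.
from dataclasses import dataclass

@dataclass
class ParametricPolicy:
    insured_amount: float
    payout_ratio: float = 0.20           # fixed-percentage payout, as described above
    wind_speed_trigger_kmh: float = 120.0

    def settle(self, observed_wind_kmh: float) -> float:
        """Pay out automatically once the observed parameter crosses the trigger."""
        if observed_wind_kmh >= self.wind_speed_trigger_kmh:
            return self.insured_amount * self.payout_ratio
        return 0.0

policy = ParametricPolicy(insured_amount=100_000)
print(policy.settle(observed_wind_kmh=135))  # 20000.0
print(policy.settle(observed_wind_kmh=80))   # 0.0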


NHS trust dumps creaking EMC SAN for Nutanix hyper-converged


“We got to the point where we had to ask, ‘Do we throw loads of money into this and keep expanding it, buying shelves and drives?’ In the end we decided to see what was out there,” said Sawyer. His team considered a setup from NetApp in the traditional three-tier architecture. “They wanted to come along with a pre-configured cabinet, but we simply didn’t have space.” The trust eventually plumped for Nutanix hyper-converged infrastructure and now has 400 virtual machines running on 24 nodes. Did Sawyer have any worries about opting for what was a new alternative to traditional IT architectures? “Yes, we had concerns. It was something completely new but with some research we felt we knew where we were going,” he said. “The attraction was that we could easily add to it and not create bottlenecks. We had been able to add to the SAN but that created a bottleneck between servers and storage. We decided to take the risk and go down the hyper-converged route.” Key benefits of the Nutanix deployment for Sawyer have been space saved in the trust’s server rooms and lack of disruption during patching and upgrades.


Linux and open-source jobs are in more demand than ever

"Open source technology talent is in high demand, as Linux and other open source software dominates software development," said Linux Foundation's executive director, Jim Zemlin, in a statement. "I am encouraged that that companies are recognizing more and more each day that open-source technology is the way to advance their businesses. The Linux Foundation, our members, and the open source ecosystem are focused on ensuring training and certification opportunities are highly accessible to everyone who wants to seek them out." I see this myself. I go to almost a dozen technology trade shows a year. And, whether they're open-source based, Open Source Summit or not, or CES, one thing is always true in the past few years: Companies aren't just hungry, they're starving for open-source and Linux-savvy workers. At some shows, especially ones about cloud technologies, such as OpenStack Summit, where literally every company was looking for employees with open-source skills. This latest survey found that Linux skills are back on top as the most sought after skill, with 80 percent of hiring managers looking for tech professionals with Linux expertise.


AI in The Contact Centre: 5 Ways Bots Can Boost Business Performance

It’s hard to ignore the impact that artificial intelligence and other disruptive technologies are having on the contact centre. In the quest to support an ever-more-demanding customer base, today’s contact centres are being forced to embrace newer, more complex solutions for end-to-end client support. Already, AI has helped us to develop chatbots to supplement the omnichannel contact centre experience and speech analytics that assist agents in understanding the sentiment behind conversations. While some brands remain concerned about the rise of artificial intelligence – fearing that bots will eventually mean the end of service agent jobs – most experts agree that the robot revolution might not be a bad thing. In fact, artificial intelligence could be just the thing that today’s contact centre needs to serve its unpredictable audience. ... Big data and AI go together perfectly. In a world where contact centres are gathering more useful customer information than ever before, AI tools give businesses a way to collect and transform that data into actionable insights for the call centre.


Can you teach an old app new tricks?

Refactoring and rewriting are the two main strategies used to migrate traditional applications to the cloud but there are other ways to modernise your apps without taking such a taxing route. Application modernisation must be approached holistically, beginning with the infrastructure, progressing to development processes, and then addressing application architecture. The fastest, most cost-effective path to application modernisation is an incremental approach that empowers teams to tackle seemingly insurmountable challenges one step at a time. Skytap Cloud is the only public cloud specifically designed to migrate and modernise traditional enterprise applications. Our Infrastructure, Process, Architecture (IPA) approach offers an incremental route to modernisation that is both realistic and achievable. By using the IPA approach to modernisation, teams are able to evolve at the pace and stage that’s right for their business.  Skytap Cloud provides the platform to support traditional applications at each stage of modernisation, from the adoption of cloud infrastructure, to the acceleration of application development and delivery, to incorporation of cloud-native and next-gen technologies.


How CDOs can manage algorithmic risks


The immediate fallout from algorithmic risks can include inappropriate or even illegal decisions. And due to the speed at which algorithms operate, the consequences can quickly get out of hand. The potential long-term implications for government agencies include reputational, operational, technological, policy, and legal risks. To effectively manage algorithmic risks, traditional risk management frameworks should be modernized. Government CDOs should develop and adopt new approaches that are built on strong foundations of enterprise risk management and aligned with leading practices and regulatory requirements. Figure 1 depicts such an approach and its specific elements. ... Create an algorithmic risk management strategy and governance structure to manage technical and cultural risks. This should include principles, ethics, policies, and standards; roles and responsibilities; control processes and procedures; and appropriate personnel selection and training. Providing transparency and processes to handle inquiries can also help organizations use algorithms responsibly. From a policy perspective, the idea that automated decisions should be “explainable” to those affected has recently gained prominence, although this is still a technically challenging proposition.


3 Ways IT Leaders Can Deliver Business Transformation

IT leaders must proactively reach out to each department to identify and support their technology needs to enable a better way to do their jobs—and improve the result for customers. There are next-gen options to improve efficiency such as internal collaboration and contact center tools like Spark and SMS text or online chat and WebEx to bridge the gap among global employees. ... Security touches every transaction of business operations, making it paramount that businesses — and IT — operate with a security-first mindset. IoT, mobile devices, and the proliferation of threats (ransomware, phishing, and bots) can be a perfect storm for a security disaster. IT leaders and designated security teams must build with privacy, defense of networks and operations, as well as with information governance in mind and consistently revisit operational policies and security technologies. ... Storing information and applications on the right type of cloud solution delivers improved user experiences, more secure information management practices and more agile backup/recovery capabilities. 


Can People Trust the Automated Decisions Made by Algorithms?


In technology design, there has been a big trend towards making systems “seamless”. In short, this means that people can focus on what they want to do, not how they want to do it, which usually helps individuals achieve what they want. Smart homes are an example of this, although many are a bit too clunky to have totally earned that title. Yet with a range of algorithmic systems today, too much seamlessness means that individuals don’t get a chance to question whether the system works the way they want it to. Your smart home might be personalised, but you can’t see where, and to whom, it is sending the data. Your Facebook news feed might seem compelling, but you can’t see who is being excluded, and why. We could run courses about algorithms in society, but that’s unlikely to solve deeper problems. Technologies move fast. My young cousin told me the other day that at school, they’d been learning about cybersecurity. “They told us not to click on pop-ups,” she said. “But how will I know what a pop-up looks like?” ... So consequently, we need to imbue education into the products and services we use every day.



Quote for the day:


“We are what we repeatedly do. Excellence, therefore, is not an act but a habit.” -- Aristotle


Daily Tech Digest - June 20, 2018

Artificial Intelligence Has Companies’ Interest, But Not Their Cash


But while companies are interested in what A.I. can potentially do for them, many aren’t willing to invest massive amounts of money in the endeavor. Some 92 percent of respondents reported overall A.I. budgets of less than $5 million, with 52 percent paying less than $1 million. However, most plan to increase their A.I.-related spending over the next year. For companies pursuing an artificial intelligence strategy, Constellation recommends starting as early as possible, as building out platforms and training algorithms takes time. A company’s data scientists and executives must also start thinking about how to structure their existing databases for interaction with the eventual A.I. platform. ... If businesses are willing to spend more on A.I. initiatives in the short term, that means more jobs and projects to work on. Earlier this year, analyst firm McKinsey & Company published a report suggesting that fewer than 10,000 professionals actually have the necessary skills to tackle the biggest A.I. problems, which is a key reason why A.I.-related salaries have climbed into the stratosphere. That report also suggested that A.I.-enhanced industries constitute a multi-trillion-dollar market, placing a high ceiling on the demand for talent.



Istio and the Future of Service Meshes


A service mesh provides a transparent and language-independent way to flexibly and easily automate networking, security, and telemetry functions. In essence, it decouples development and operations for services. So if you are a developer, you can deploy new services as well as make changes to existing ones without worrying about how that will impact the operational properties of your distributed systems. Similarly, an operator can seamlessly modify operational controls across services without redeploying them or modifying their source code. This layer of infrastructure between services and their underlying network is what is usually referred to as a service mesh. Within Google, we use a distributed platform for building services, powered by proxies that can handle various internal and external protocols. These proxies are supported by a control plane that provides a layer of abstraction between developers and operators and lets us manage services across multiple languages and platforms. This architecture has been battle-tested to handle high scalability, low latency and provide rich features to every service running at Google.
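To make the decoupling concrete, here is a toy sketch of the idea, not Istio or Envoy: the service code contains only business logic, while a proxy-style layer adds retries and telemetry uniformly around it.

# Toy illustration of the decoupling a service mesh provides: the service knows
# nothing about retries or telemetry; a proxy-style wrapper adds them uniformly.
# This is plain Python, not Istio or Envoy.
import time

def order_service(request: dict) -> dict:
    """Plain business logic, with no operational concerns baked in."""
    return {"status": "ok", "order_id": request["order_id"]}

def with_proxy(service, max_retries: int = 2):
    """Sidecar-style wrapper adding retries and basic telemetry around a service."""
    def proxied(request: dict) -> dict:
        for attempt in range(max_retries + 1):
            start = time.perf_counter()
            try:
                response = service(request)
                print(f"telemetry: {service.__name__} ok in {time.perf_counter() - start:.4f}s")
                return response
            except Exception as exc:
                print(f"telemetry: attempt {attempt + 1} failed: {exc}")
        return {"status": "error"}
    return proxied

proxied_order_service = with_proxy(order_service)
print(proxied_order_service({"order_id": 42}))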


No, Culture Does Not Eat Strategy for Breakfast

That much is true: Culture can definitely derail your success. But to imply that culture can eat strategy -- that culture is fundamentally a stronger driver of your success -- is where the metaphor starts to fail us. ... A cohesive culture cannot fix a bad strategy. The deeper truth here is that culture and strategy are really two sides of the same coin. A good friend of mine once said that culture does not eat strategy for breakfast but that strategy and culture need to go out to lunch. They both influence each other, so it is the extent of their alignment that is most important -- not the power one has over the other. Your culture should be intentionally shaped to make your chosen strategy more effective. That means, by the way, that if you make a major change to your strategy, you’ll also have to work to change your culture to match. It’s only when you see strategy and culture as separate that they start to eat each other. Consider the legendary focus on customer service within the culture at Zappos. They have every employee (regardless of the job function) go through customer service training and spend time answering customer calls on the phone. It’s that important to their culture. But what most people don’t mention is that this focus on customer service was a strategic choice.


When it Comes to Digital Transformation, Complacency Is Not an Option

Digital transformation encompasses numerous business considerations, including cost, learning curve, migration strategy and a host of technical concerns. And while it brings with it the possibility of an incredibly positive change for the business, there’s the human element to consider as well. The reality is that people don’t like change. They have a comfort zone, and being forced to leave that zone can lead to a great deal of resistance. Having senior management stand firmly behind the introduction of new systems and new technologies from the outset can go a long way towards alleviating those fears. Buy-in from the senior team sends a message that the new technology is a strategic move wanted by and desired by the business. It also helps to have management articulate the many advantages of embracing new technology … rather than facing the risk of the business being overtaken by competitors. ... Regardless of how smoothly your digital systems are operating today, keep your eye on the horizon. Stay informed. Chat regularly with your customers and vendors. Read what both analysts and editors have to say. Have a discussion with a solution partner that’s closer to the technology than you are.


Microsoft to tighten screws on traditional Office, Gartner predicts

Gartner's prediction, if accurate, would expand a policy that Microsoft put in place more than a year ago. Under rules announced in April 2017, owners of perpetual-license versions of Office - those that, once purchased, let customers run the software as long as they want - can connect to Microsoft's cloud-based services only during the first half of their 10-year support lifecycle. Those services include Microsoft-hosted Exchange email, the OneDrive for Business storage service and Skype for Business. The new policy will be enforced starting Oct. 13, 2020, Microsoft said. By that date, Office 2010, Office 2013 and the current suite, Office 2016, will have dropped out of mainstream support, the label for Office's first five years. (Office 2016's mainstream support will expire on that date.) Office 2019, which is slated for release this fall, will reach the end of mainstream support sometime in late 2023. But Gartner believes that Microsoft will change the rules before then for that edition, telling customers that they must subscribe to Office 365 - and thus have Office 365 ProPlus - to use any cloud services.


Why blockchain won't transform IoT security

When determining whether or not to work with blockchain, IoT developers must ask themselves one of the most basic computer science questions, Corman said: "Can I already do this with a distributed database?" Often, the answer is yes, he added. "It's not new technology," Lord said. "Most of what we talk about with blockchain is a better set of patterns for dealing with decentralized or distributed problems. It's not going to change how we deal with things in IoT." One way that blockchain could be useful is in distributing a policy change across different devices. However, anyone advertising blockchain as a security feature in an IoT device should at this point be met with skepticism, Lord said. IoT manufacturers also want to bring scale to their devices, said Rob Black, founder and managing principal of Fractional CISO. "When you think about blockchain, it's anything but scale—you're bringing along a huge ledger of transactions," Black said. "Compatibility with blockchain and IoT in today's environment is almost nonexistent."


Why digital transformation spending will reach $1.1 trillion – and what happens from here


Driven in part by investments from the manufacturing industries, smart manufacturing ($161 billion) and digital supply chain optimization ($101 billion) are the DX strategic priorities that will see the most spending in 2018. Other strategic priorities that will receive significant funding this year include digital grid, omni-experience engagement, omnichannel commerce, and innovation acceleration. The strategic priorities that are forecast to see the fastest spending growth over the 2016-2021 forecast period are omni-experience engagement (38.1 percent compound annual growth rate (CAGR)), financial and clinical risk management (31.8 percent CAGR), and smart construction (25.4 percent CAGR). "Some of the strategic priority areas with lower levels of spending this year include building cognitive capabilities, data-driven services and benefits, operationalizing data and information, and digital trust and stewardship," said Craig Simpson, research manager at IDC. To achieve its DX strategic priorities, every business will develop programs that represent a long-term plan of action toward these goals.


Chief data officers go to summer school


Carruthers said she considers herself “extremely fortunate to be part of a first generation of chief data officers – a role that requires a passion for creative thinking and a love of technology to develop pragmatic solutions for interesting challenges”. “Our CDO summer school combines these key elements, bringing together a community of like-minded CDOs for the first time to create an engaging learning environment for an exciting new generation of data leaders,” she said. Speaking to Computer Weekly, Southern Water’s Jackson stressed the importance of the community-building element of the course among a group of C-level leaders of recent vintage. “At present, there are [only] a few hundred data leaders, at that level, in UK organisations. And there is no well-trodden career path,” he said. The topic of data, and how trusted it can be, is an increasingly common one at boardroom level, and so someone needs to “own” that element of business decision-making, said Jackson. “And there is a real shortage of people, at that senior level, who combine business acumen with technical expertise. There is no university pumping out CDOs,” he added.


What is NVMe, and how is it changing enterprise storage

There are extremely latency-sensitive workloads that will benefit immediately from NVMe’s performance boost, such as real-time big-data analytics or high-performance database environments. Real-time analysis and response are becoming a necessity for businesses, and IDC predicts that between 60% and 70% of Fortune 2000 companies will have at least one mission-critical workload that leverages real-time big data analytics by 2020. “When an enterprise has a workload that, if it runs faster, drives a bottom-line business benefit such as more commissions, higher revenues, better customer service – those are the workloads that might drive a customer to say, ‘I’ve got to have NVMe today,’” Bergener says. The first storage platforms that leveraged NVMe instead of SCSI technology began to ship in 2016 and 2017 from startups including Apeiron Data Systems, E8 Storage, Excelero, Micron and Pavilion Data Systems. These systems are mainly targeted for specialized workloads with extremely latency-sensitive workloads, such as trading analysis.


Ransomware is taking a toll on banks. Here’s how they’re fighting back

To shore up its data protection, Glazebrook said the 101-year-old Adams Bank invested in technology and partnered with the IT resiliency firm Zerto. It recently installed a platform that immediately alerts the bank’s IT team if ransomware infects the system and can rewind to the minute it hit to fully recover the data in about an hour. (Glazebrook describes it as TiVo for ransomware.) The new system also allows for near-instant replication of data files, and IT staff can also remotely perform checks on system status. Of course, no matter what technology a bank implements, none can ever be truly safe from cyberattacks like ransomware, as security professionals and cybercriminals continually engage in a cat-and-mouse game, said Michael Hathaway, co-founder of Windmill Enterprise, which operates a blockchain security network called Cognida. “Unfortunately, there are no easy answers,” he said. On the one hand, many banks prefer to “keep all their data secure physically and in a single place, and then that can become point of attack,” he said. Having everything in one place “gives a sense of security but provides a place where hackers can try and continually get access to.”



Quote for the day:


"A leader does not deserve the name unless he is willing occasionally to stand alone." -- Henry A. Kissinger


Daily Tech Digest - June 19, 2018

What is a cloud architect? A vital role for success in the cloud

There are plenty of paths to becoming a cloud architect — if you’re still early in your career, you might want to consider a formal degree program. But for those with experience in IT, Gartner suggests IT pros with the following skills and experience will find the transition easiest: Hilgendorf notes that the role of a cloud architect is “a new form of an enterprise architect,” and that it’s an easy transition from enterprise to cloud architect. However, the report cautions it’s best suited to those with “real hands-on program leadership”; Those with virtualization or infrastructure architecture experience are often a good fit for the cloud architect role, since “many cloud programs begin with simple IaaS projects, and virtualization architects are best-positioned to understand the technical nuances of a ‘virtualization-like’ environment,” says Hilgendorf; Some of the biggest issues with cloud adoption arise with integration across the company. Integration architects are adept at working with complex systems, and they’re typically skilled at working across departments; and Employees who are known for rocking the boat or pushing the envelope with technology can serve as valuable liaisons to encourage company buy-in to new cloud technologies.



Blockchain is Coming for Storage

Each node contains the exact same data and transaction history, and that information is secured with cryptographed hashes and digital signatures. Combined in a shared peer-to-peer network, the nodes create a distributed ledger system where each node has equal rights. Furthermore, they are not dependent on each other; if one node leaves the network, the others still function because they have the same secured data.  The savings result from not having to deal with intermediaries or third parties, such as servers, which transmit the info back and forth, waiting for authentication and verification each time. Now, apply this model to data storage. By decentralizing storage, it no longer exists on a server, rather across a network of shared ledgers, each containing the same encrypted data. From a security standpoint alone, this has significant ramifications. Data breaches and hacks have typically focused on a centralized database — either on premises or in the cloud. Once the database or server is hacked, business is at least temporarily brought to a halt. In the blockchain model, if an attacker was able to breach one node, the others would still function, and so business continues. The same principle applies if there is a power outage.
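A minimal sketch of the hash-chained ledger idea described above: each block records the hash of its predecessor, so tampering with any earlier record invalidates everything that follows. This is a toy illustration, not a production blockchain.

# Toy hash-chained ledger: every block carries the hash of the previous block,
# so altering an earlier record breaks the chain for every node that checks it.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: dict) -> None:
    previous = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": previous})

def chain_is_valid(chain: list) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger: list = []
append_block(ledger, {"file": "report.docx", "owner": "alice"})
append_block(ledger, {"file": "report.docx", "owner": "bob"})
print(chain_is_valid(ledger))           # True
ledger[0]["data"]["owner"] = "mallory"  # tamper with an earlier record
print(chain_is_valid(ledger))           # False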


Resource constraints can spark frugal digital innovation

The history of innovation in resource-constrained countries around the world shows that limited resources do not restrict innovation. Frugal innovation in underdeveloped countries has sparked major products, such as the ChotuKool refrigerator launched in India a few years ago for only about $50 (instead of the typical $500). Many mothers in the Mumbai slums, who had never had refrigeration, have bought this battery-operated refrigerator, and it has significantly changed their lives. ... Three recent innovations at Prysmian Group, the world’s largest manufacturer of energy and telecommunications cables, exhibit the kind of frugal digital innovation that can take place with constrained resources. Using a two-person internal team in collaboration with the Politecnico di Milano university, the Innovation Lab within the IT Department of this Milan-based company developed each of these innovations for less than 100,000 Euros, saving millions on each one. Among the first ideas created by the Prysmian Group’s new Innovation Lab was a drone-based monitoring system for inventory tracking in the company’s warehouses. Every facility stores hundreds of cable products, each weighing thousands of pounds.


Brace your IT operations team for a cloud migration


One of the first challenges an IT operations team encounters with a move to the cloud is licensing. When cloud bursting is involved, the licensing and payment models become especially complex. Also, not all on-premises applications and services are designed for the cloud, and some enterprises overpay as a result. Enterprises must also deal with the human element during a cloud migration. With an on-premises deployment, internal operations personnel monitor performance, as well as manage resources, updates and patches. However, once workloads move to the cloud, the provider will take over some of these tasks. But don't start downsizing just yet. Staff can find new roles, including working with cloud vendors to make sure applications integrate well with existing on-premises systems. In an on-premises environment, dev and ops teams define an application's resource requirements and then monitor that application to adjust those resources over time. Capacity management for physical server workloads was pretty straightforward -- with mostly linear growth -- but cloud adds a new set of complexities that could cost enterprises money.


Vulnerabilities in these IoT cameras could give attackers full control

By exploiting vulnerabilities in the internet-connected cameras from Axis Communications, researchers at security firm VDOO found that remote attackers could take over devices using just the IP address and without previous access to the camera or its login credentials. The vulnerabilities have been disclosed to Axis, which has updated the firmware of all the affected products in order to protect users from falling victim to an attack. In a blog post, VDOO states that "to the best of our knowledge, these vulnerabilities were not exploited in the field". In total seven vulnerabilities in the cameras were discovered and researchers have detailed how three of them could be chained together in order to provide remote access to the cameras and execute remote shell commands with root privileges. These include providing access to the camera's video stream, the ability to control where the camera is looking and to control motion detection and the ability to listen to audio. There's also the potential for cameras exploited in this way to be used as an entry point in the network for a wider attack, as well as the possibility of the camera being roped into a malicious botnet.


Working from home: 5 hidden downsides no one talks about

"Security is a huge, huge issue when you have remote workers," Carroll emphasized. Working off of a home network introduces all kinds of risks and vulnerabilities to your work files. Even if you've never had a cybersecurity issue on a personal device with your home network, that doesn't mean you are always safe."It's harder to maintain and control, because when people are working remotely, depending upon who their ISP is, their internet service provider, that opens up other probabilities and introduces other variables that aren't necessarily there when you're within a confined network within a workplace," Ryan confirmed. A Virtual Private Network (VPN) and multi-factor authentication are the viable solutions, Carroll said. With a VPN, a private, encrypted channel is connected between your device and a VPN server. No one but the user and the VPN sees or accesses the information, not even internet service providers! Multi-factor authentication is widely popular and helpful too. The user must provide at least two separate pieces of evidence proving their identity, which will then gain them access to the respective site.


What I’m Telling Business People About Why Relational Databases Are So Bad

Ever since relational databases were proposed, I have been puzzled as to why this seemingly bizarre architecture has been allowed to persist. This is like having your filing department speak a foreign tongue so that all instructions have to be written down in this language. But it’s worse. When you store that timesheet in a relational database, you have to totally take it apart, with the header information in one table and all the detail lines that assign hours to projects as separate rows in another table. You have to take apart the form and construct the SQL that takes those bits and stores them. Oh yes, and make sure you put sequence numbers on all those detail lines in the timesheet if you want to be able to get them back in the same order. When you want the form back, you have to write SQL instructions to join the tables together and then you have to pick out all the timesheet information from the returned results and put it together as a form.
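The timesheet example is easy to see in code. Below is a minimal SQLite sketch of the point being made: the form is taken apart into a header table and a detail table on the way in, and has to be joined back together and re-ordered on the way out. The table and column names are invented for illustration.

# The timesheet "form" split across a header table and a detail table, then
# reassembled with a join.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE timesheet_header (id INTEGER PRIMARY KEY, employee TEXT, week TEXT);
    CREATE TABLE timesheet_line   (header_id INTEGER REFERENCES timesheet_header(id),
                                   line_no INTEGER, project TEXT, hours REAL);
""")

# Storing the form means taking it apart...
conn.execute("INSERT INTO timesheet_header VALUES (1, 'Ada', '2018-W25')")
conn.executemany("INSERT INTO timesheet_line VALUES (1, ?, ?, ?)",
                 [(1, "Project X", 20.0), (2, "Project Y", 17.5)])

# ...and getting it back means joining the pieces and restoring the line order.
rows = conn.execute("""
    SELECT h.employee, h.week, l.line_no, l.project, l.hours
    FROM timesheet_header h JOIN timesheet_line l ON l.header_id = h.id
    WHERE h.id = 1 ORDER BY l.line_no
""").fetchall()
for row in rows:
    print(row)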


Design Patterns Explained – Service Locator Pattern with Code Examples


The goal of this pattern is to improve the modularity of your application by removing the dependency between the client and the implementation of an interface. Interfaces are one of the most flexible and powerful tools to decouple software components and to improve the maintainability of your code. ... All of these principles enable you to implement robust and maintainable applications. But they all share the same problem — at some point, you will need to provide an implementation of the interface. If that’s done by the same class that uses the interface, you will still have a dependency between the client and the implementation of the interface. The Service Locator pattern is one option for avoiding this dependency. It acts as a central registry that provides implementations of different interfaces. By doing that, your component that uses an interface no longer needs to know the class that implements the interface. Instead of instantiating that class itself, it gets an implementation from the Service Locator. That might seem like a great approach, and it was very popular with Java EE, but, over the years, developers started to question this pattern.
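For readers who have not met the pattern, here is a compact sketch of a Service Locator; the article's context is Java EE, but Python is used here purely for brevity, and the class and method names are illustrative.

# Compact illustration of the Service Locator pattern: clients ask a central
# registry for an implementation of an interface instead of constructing one.
from abc import ABC, abstractmethod

class MessageSender(ABC):
    @abstractmethod
    def send(self, text: str) -> None: ...

class EmailSender(MessageSender):
    def send(self, text: str) -> None:
        print(f"emailing: {text}")

class ServiceLocator:
    _registry: dict = {}

    @classmethod
    def register(cls, interface, implementation) -> None:
        cls._registry[interface] = implementation

    @classmethod
    def resolve(cls, interface):
        return cls._registry[interface]

# Wiring happens in one place; the client code below never names EmailSender.
ServiceLocator.register(MessageSender, EmailSender())

def notify_user(text: str) -> None:
    ServiceLocator.resolve(MessageSender).send(text)

notify_user("build finished")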


Cisco’s David Goeckeler talks security, networking, software and SD-WAN outlook

There’s some history to it but the short answer is you can’t build a world-class security architecture today without leveraging the network. That’s where the world has evolved to. A number of years ago a lot of security was about protecting the enterprise, and it still is. You should block everything you possibly can, but you can’t keep everything out. Everybody knows that. You can’t block everything. If you can’t block everything, there’s going to be something in your network. Now there’s something in your network, and the network is a pretty good place to defend and to look for it. There are several things the customers need to do. One is they need to what I call ‘constrain the operational space of the attacker.’ If somebody gets into your network through compromised credentials, which is a very prevalent technique. I get your credentials, and I can get in your network. You want to isolate them to only the part of the network they have access to. That’s segmentation. It turns out that’s the first thing we automated with DNA Center was software-defined access, which is like software-defined segmentation. It helps you protect your network. The problem with segmentation is it’s hard to implement, so we automate it.


Your Online Data Is In Peril. The Blockchain Could Save It

With blockchain, Lubin claimed, “The business model of exploiting people [and] personal information is going to change. I think it’s going to be even better, potentially, for those companies. They’ll be less exposed to the risk if we are controlling our own data, encrypted, and enabling it to be selectively disclosed in situations that we designate.” Imagine owning all your digital health care records and granting providers or insurance agents access only to the data of your choosing. These kinds of experimental blockchain technologies will require cautious and careful experimentation worth investing in. But the goal shouldn’t be “hyper growth” and fast returns on those investments. There are lots of questions about how blockchain will work in the wild. How should it be regulated? Can it run without consuming vast amounts of energy? Is the technology mature enough to really go mainstream? A couple months ago, I discovered another use for blockchain and a project that might be ready for prime time. A collective of developers and journalists are launching a radical blockchain experiment called Civil.



Quote for the day:


"A point of view can be a dangerous luxury when substituted for insight and understanding." -- Marshall McLuhan


Daily Tech Digest - June 18, 2018

The right kind of AI: why intelligence must be augmented, not artificial

Einstein is a layer within the Salesforce platform that helps users make the best use of their data, delivering insight that allows them to truly focus on their customers. It does so by utilising the computing power of AI, a technology at the heart of everything Salesforce is trying to achieve. Like many observers, Salesforce believes that AI is set to be the dominant technology of the next decade and that understanding customers is best achieved by AI. It allows users to address a number of challenges, such as: learning from the data coming into the organisation; improving sales engagement; being proactive in customer service problem solving; becoming more predictive in addressing issues before they become a real problem. Salesforce Ventures has announced a new $50 million fund to encourage startups to build AI-fuelled applications on top of Salesforce. This overall change of focus is reflected in the apps that are proving most popular within AppExchange. AI’s ability to automate certain tasks and augment any number of others, and to bring enormous insight based on big data, is behind this rise in AI-based apps.


Effective application security takes a team effort

When it comes to application security, the DevOps team have the hardest job of all. Actionable vulnerability data is rarely available during actual development cycles, meaning many security flaws only surface once an application has already gone live. Furthermore, due to time constraints imposed by senior leadership, DevOps teams are often confined to conducting security assessments at the last minute, just prior to release, which is far too late in the day to be effective. DevOps teams need to work closely with security professionals and senior leadership to build security into the entire development lifecycle. Moving to a continuous integration process can help with this, as can the use of both dynamic scanning and source scanning throughout the development and implementation phases. It’s also the role of DevOps to demonstrate to senior leadership that a slightly longer development phase is far more preferable to repeating the entire process multiple times due to vulnerabilities only being discovered after release. However, this is only possible if both DevOps and security professionals can communicate effectively up the chain of command, without fear.


Myth-buster: the real costs of Public and Private Cloud

Private cloud infrastructure is generally perceived as costly due to the consultative element and ongoing management costs. Despite public cloud seeming the far more cost-effective option on the surface, there are some hidden costs attached. For example, there is a hidden charge attached to the cost of moving data traffic between the various physical and virtual machines used by the public cloud. Public cloud providers generally charge an additional 20% on top of the fees charged by the platform providers themselves. Another hidden cost of public cloud is the background management and maintenance services, which are of course necessary for any type of cloud infrastructure. Finally, the question of reversibility is key. When a firm migrates its infrastructure from one cloud to another or to an internal architecture, the costs involved are often underestimated. Once all operational developments have been tailored specifically for a public cloud, the cost of migrating away from that can quickly become expensive! And that’s not even including the migration costs involved when transferring data to an external platform, which can also be high.
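A toy calculation of how quickly that kind of surcharge compounds; only the 20% figure comes from the article, and the base spend is invented.

# Toy illustration of the roughly 20% management surcharge mentioned above.
# Only the surcharge rate comes from the article; the base spend is invented.
base_monthly_platform_spend = 40_000   # hypothetical public-cloud platform bill
management_surcharge = 0.20            # additional fee cited above

annual_surcharge = base_monthly_platform_spend * management_surcharge * 12
print(f"Hidden annual management cost: ${annual_surcharge:,.0f}")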


RegTech: The future of market abuse surveillance

Despite the risks, using independent solutions for different forms of data is currently the norm. In fact, 70% of respondents to a PwC market abuse surveillance survey2 are using three or more software vendors to execute their surveillance requirements and 75% are unable to review trade alerts alongside contemporaneous electronic communications or voice alerts. Further, alerts generated by multiple systems are typically reviewed manually by separate compliance teams that do not have easy access to each other’s information. Such dispersion impedes firms from having a true 360° view of employee behavior and limits the ability to stay ahead of emerging risks. Adding to the problem, data volumes and sources have also been increasing as the methods that traders use to communicate on a daily basis – from cell phones to chat apps to social media – continue to diversify. Communications surveillance also typically uses lexicon-based search techniques, which tend to produce high volumes of false positives and potentially miss true suspicious behavior. Finally, there are challenges associated with high volumes of false positives,3 some stemming from legacy systems and scenarios, which may not be calibrated with the current business landscape and risks.
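A toy sketch of why naive lexicon matching over communications data generates false positives; the watch-list terms and messages are invented for illustration.

# Toy lexicon matcher of the kind described above. Innocuous messages that
# happen to contain a watch-list word get flagged (a false positive), while
# genuinely suspicious phrasing with no listed word slips through.
WATCH_LIST = {"guarantee", "front-run", "insider"}

def flag(message: str) -> bool:
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & WATCH_LIST)

print(flag("I can guarantee you a table at 8pm tonight"))   # True: false positive
print(flag("Let's move on this before the announcement"))   # False: potentially missed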


Turn on the lights! The key challenges for businesses transformation

Waste can simply be defined in terms of its opposite – value. While costs are inevitable for all businesses, waste is optional. When a company is creating value and nothing but value, there is no waste. However, no system is that efficient and there will always be some waste that is uneconomical to remove. But a large percentage of organisational waste – anything from 50% to 70%, based on most studies – provides a healthy return on investment when removed, while contributing to competitiveness. And this is why we turn the lights on. And while those lights are on, it is important that all can see the mess for what it is. It will almost certainly be a lot bigger than anyone had imagined, and leaders need to be prepared for that. They also need to be prepared to forgive, because if they don't, the waste will simply go back underground. Keeping the lights on means focusing not on the people, and not on the waste, but rather on the causes of the waste. If time and resources are spent only on cleaning up the mess, things will get dirty again very soon. The endgame here is to understand and deal with the institutional practices and structures that are endemic to the creation of institutionalised waste.


The digital transformation and the importance of humans

Change always creates stress and uncertainty for us as human beings. In my day-to-day work at Siemens, I often notice, however, that many people are generally open to change as such. In fact, employees often want things to change. As a rule, the difficulties arise as soon as they have to try out new things and implement concrete changes themselves. Then I often hear statements like: “I don’t even know where to begin.” Or: “I have so much to do and no time for anything else.” And that’s exactly where the problem is: we have to understand that change isn’t “deferrable,” let alone a phase that ends at some point. We can’t cut ourselves off from new developments, nor can we reduce the speed at which changes occur. To keep pace, we’ve got to adapt and move faster – as people, as a company and as a society. We’ve got to be open for new things and leverage digitalization and its opportunities in such a way that they help us increase the quality of life and benefit companies as a whole. To accomplish this goal, we have to do some things differently than we have in the past. And this shift can’t happen without a culture change.


Cisco makes SD-WAN integration a top priority

“The branch is a very complicated part of the network when you think about it because when you think about a global business where you’ve got all the different types of interconnect you have around the world and you’re trying to manage all that. That part of the network is going to a software-defined WAN, and it’s an area we’ve been investing in heavily,” said David Goeckeler, executive vice president and general manager of networking and security at Cisco, in an interview with Network World. “We had an iWAN solution. We have an SD-WAN solution from Meraki, and then we purchased Viptela because they had innovated on the cloud side of it, and we wanted to marry that up with the [Integrated Services Router] franchise that we had in iWAN, and we are well down the path of that integration. And I think we’re seeing big projects move forward now in the SD-WAN space. It’s a market that had been kind of stalled because I think customers were trying to figure out what to do,” he said. Other Cisco executives reiterated the importance of getting Viptela further integrated into the company’s networking portfolio. “One of the important parts of what Viptela brings is an easy way to implement really strong end-to-end segmentation that lets users build and secure different segments of their networks,” said Scott Harrell.


Does Cyber Insurance Make Us More (Or Less) Secure?

Cyber insurance policies can be complex and tricky to understand, and anxious C-suite executives often buy cyber insurance without understanding the full extent of what policies cover and what they don't. To grow the market and diversify the risk, insurance companies are taking on all comers, often with no adequate measure of the true risk any given insured enterprise faces. Both insurance carriers and enterprise buyers of cyber insurance are groping their way forward in the dark, a potentially dangerous scenario. Most insurance carriers, however, are aware of this blind spot and are researching how to better measure and quantify cyber risk. Measuring cyber risk is very different from measuring risk in other domains. If you want to rate the risk of an earthquake or a hurricane, the actuarial science is sound. A data center in a hundred-year flood plain can expect a catastrophic flood once in a hundred years. Cyber risk, on the other hand, remains far harder to quantify — a problem, it must be noted, the insurance business is working hard to solve.


How to know when data is 'right' for its purpose

There are certainly scenarios where IT can answer the “right data” question with a confident yes or no and with only the most minor qualification. That is the case with metrics and calculations, because there is always a right answer when math is involved. The qualification would be that IT has the correct definition and, of course, that the underlying data has been populated consistently. Another way of working through this challenge is to clarify the expectation of the business user. Asking a few more questions to ascertain the true need and the reason behind the question can help frame the answer tremendously. Is the question based on previous instances of “bad” data? Again, “bad” data is relative and is always judged from the perspective of the business user. If so, then framing the response to highlight improvements in the consistency and validation of the source data may reassure users and meet their needs. Maybe the question is related to reference data that had not previously been governed or monitored. If so, walking through the steps taken to evaluate validity against a set of expected results will start building confidence in the final product.


Default Interface Methods in C# 8


The main benefit that default methods bring is that it is now possible to add a new default method to an existing interface without breaking the classes that implement that interface. In other words, this feature makes it optional for implementers to override the method. An excellent scenario for this feature is the logging example described below; the ILogger interface has one abstract WriteLogCore method. All of the other methods, like WriteError and WriteInformation, are default methods that call the WriteLogCore method with a different configuration. The ILogger implementer has to implement only the WriteLogCore method. Think of how many lines of code that saves in each class that implements the logger type. While this feature can be a great thing, there are some dangers, as it is a form of multiple inheritance. Hence it suffers from the Diamond Problem, which is described below. Also, the interface methods must be "pure behavior" without state; that means interfaces still cannot, as before, directly reference a field.
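The shape of the pattern is easier to see in code. The article's example is C# 8, but Java's default interface methods behave the same way, so the sketch below simply mirrors the ILogger shape described above with Java naming; it is an analogy for illustration, not the article's actual code.

```java
// Sketch of the logger pattern from the article, expressed with Java's
// analogous default interface methods (the article's feature is C# 8;
// the method names below simply mirror those mentioned in the text).
interface Logger {
    // The single abstract member every implementer must provide.
    void writeLogCore(String level, String message);

    // Default methods: implementers get these for free and may override them.
    default void writeInformation(String message) {
        writeLogCore("INFO", message);
    }

    default void writeError(String message) {
        writeLogCore("ERROR", message);
    }
}

// A concrete logger only has to implement writeLogCore.
class ConsoleLogger implements Logger {
    @Override
    public void writeLogCore(String level, String message) {
        System.out.println("[" + level + "] " + message);
    }
}

public class LoggerDemo {
    public static void main(String[] args) {
        Logger logger = new ConsoleLogger();
        logger.writeInformation("service started");
        logger.writeError("connection lost");
    }
}
```

The diamond problem mentioned above appears when a class implements two interfaces that each supply the same default method; in Java, for example, the compiler refuses to guess and forces the class to provide its own override.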



Quote for the day:


"If you're not failing once in a while, it probably means you're not stretching yourself." -- Lewis Pugh


Daily Tech Digest - June 17, 2018

Making Blockchain Enterprise Ready

Cisco Blockchain Platform
Today, however, many enterprise blockchains are running in the cloud and often in a single datacenter. This goes against the philosophy that blockchains should be fully decentralized and opens up security risks associated with managing data in a single location. Cisco believes the next generation of enterprise blockchain systems should be fully decentralized and offer hybrid deployment models, where blockchain nodes can be on-premise behind firewall systems or use public cloud nodes or some combination of the two. In addition, business leaders need to think holistically about end-to-end security. A common misconception is that the distributed nature of blockchains makes them inherently secure, and in many ways, they are. However, without the correct design measures, they can be prone to multiple threats, for example through the exploitation of infrastructure-level vulnerabilities. Our approach involves end-to-end security architecture, leveraging analytics that spans the infrastructure layer through to the application layer.


India's AI opportunity and the NITI Aayog strategy
What has changed in recent years in the promise of AI is the coming together of very advanced computing power with highly sophisticated algorithms and networks that collaborate to recognize patterns, discern meaning from vast sets of data and train themselves to get better at these tasks.  For the enthusiasts, the big question in AI might be when we will achieve machine super-intelligence, or Singularity, the point at which machine intelligence explodes to vastly surpass human intelligence. It is chillingly described as the last invention humans will make. But for a country at India’s level of socio-economic development, the suite of AI technologies can be applied effectively ... The NITI Aayog report proposes an umbrella organisation to shape and implement India’s AI policies—from stitching global partnerships to picking specialised teams to pursue audacious moonshot projects. It identifies five focus areas and a two-tiered institutional structure - government-led CORE or Centre of Research Excellence to focus on core research and private sector-led ICTAI or International Centers of Transformational AI to focus on application-based AI research.


Communicating Architecture with Stakeholders

Because architects are often visually oriented, we tend to show the fruit of our efforts by means of diagrams. But this is not always the best option; it mainly works for 1) design work and 2) visually oriented people. In management, we often find people with a legal background. They are ill-served by diagrams and other pictures, but instead require text, while those with a background in finance need numbers, tables, and charts. Speaking their “language” is an important part of effecting the change you want to see. The information contained in architecture models can be displayed in many forms – not just diagrams. In particular, the various dependencies between the elements of your architecture can be exploited to create cross-cutting insights that serve the needs of various stakeholder groups. Displaying this information in multiple ways may help different types of stakeholders gain the understanding they need to move forward. Below, you see the technology obsolescence risk of (part of) the business capabilities of an organization. The end-of-life dates of various technologies, as provided by Technopedia, are aggregated across the entire architecture via, in this instance, applications and business processes supporting each capability.
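As a rough illustration of the kind of cross-cutting insight described above, the sketch below rolls technology end-of-life dates up to the business capabilities they support. The sample data and the aggregation rule (a capability inherits the earliest end-of-life date among its supporting technologies) are assumptions made for the example, not the article's actual model or real Technopedia data.

```java
import java.time.LocalDate;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: roll technology end-of-life dates up to business capabilities.
// Both the sample data and the aggregation rule (earliest end-of-life wins)
// are illustrative assumptions.
public class ObsolescenceRollup {
    public static void main(String[] args) {
        // Capability -> technologies supporting it (via applications/processes).
        Map<String, List<String>> capabilityToTech = Map.of(
                "Customer Onboarding", List.of("Java 7", "Oracle 11g"),
                "Payments", List.of("Windows Server 2008", "Java 8"));

        // Technology -> end-of-life date (as a catalog such as Technopedia might supply).
        Map<String, LocalDate> endOfLife = Map.of(
                "Java 7", LocalDate.of(2019, 7, 31),
                "Java 8", LocalDate.of(2022, 3, 31),
                "Oracle 11g", LocalDate.of(2020, 12, 31),
                "Windows Server 2008", LocalDate.of(2020, 1, 14));

        Map<String, LocalDate> capabilityRisk = new HashMap<>();
        capabilityToTech.forEach((capability, techs) -> techs.stream()
                .map(endOfLife::get)
                .min(LocalDate::compareTo)              // earliest date = highest risk
                .ifPresent(date -> capabilityRisk.put(capability, date)));

        capabilityRisk.forEach((cap, date) ->
                System.out.println(cap + " is exposed from " + date));
    }
}
```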


Setting Up Ethereum Blockchain On Azure

This article will focus on working with Ethereum on the Microsoft Azure platform. Azure is a cloud platform from Microsoft that can be used for many other purposes as well. We’ll use a consortium from Azure. A consortium, in Ethereum’s context, means a collaboration between supporting members of a blockchain with underlying infrastructure. The consortium helps us configure the blockchain and its controlling parts on Azure. We create a genesis block and a default account with initial Ether allocations. The consortium we set up here is a private blockchain and is not part of the public Ethereum blockchain, which means Ether from the private blockchain cannot be transferred or exchanged directly with Ether from public blockchains. Azure helps us set up the baseline blockchain infrastructure by providing templates. We’ll first set up a blockchain consortium on Azure, and Azure will create a genesis block for us. If you do not have a paid Azure account, you can take advantage of Azure’s benefit for new accounts, which gives you $200 in credits to play around with and explore Azure during the free trial. If you are new to Azure and do not have an account, follow the sign-up process; otherwise, log in directly to your portal.


Drivers Of Crypto Disruption: Will Public Or Private Blockchains Lead The Way Forward?

Companies may have good reasons for wanting to avoid implementing public blockchain infrastructure. Chief among their concerns would be that they want to keep their information and transactions private. Firms need to protect their intellectual property and to safeguard critical pieces of information. A private blockchain can function like its public counterparts with the exception of a permissioning system where only authorized users could interact with the blockchain. External parties such as accountants, auditors, and regulators could be brought into the structure in order to monitor business activities in real time. As such, blockchain creates the possibility for what many are calling a triple entry accounting system. Transactions are not just recorded as debits and credits on separate books when companies do business with one another, but the transactions can be easily verified on a blockchain as well. This has the potential to eliminate some of the chief mechanisms for fraud even if it will prove challenging to eliminate fraud completely.


2018 Banking Industry Outlook: Six themes driving banking industry trends

Figure 1: Six macro themes and five banking businesses
For banks globally, 2018 could be a pivotal year in accelerating the transformation into more strategically focused, technologically modern, and operationally agile institutions, so that they may remain dominant in a rapidly evolving ecosystem. This metamorphosis is far from easy as most banks grapple with multiple challenges: complex and diverging regulations, legacy systems, disruptive models and technologies, new competitors, and, last but not least, an often restive customer base with ever-higher expectations. ... Although banking has undoubtedly improved in many ways in the last couple of decades, most organizations have not gone through the customer-centric transformation that other industries have undergone. With widespread digital disruption, banks may even risk losing control over customer experience. Of course many banks, global and local, large and small, have changed their market and customer strategies since the financial crisis. Many of these decisions may have been forced upon them by regulatory expectations, and perhaps are not necessarily grounded in a refined understanding of markets and customers.


Image Based Risk Assessment In Cancer

Even before machine learning and deep learning, image analysis software was available that could identify edges, boundaries, geometric shapes, and density distributions, and perform texture analysis, as aids for the pathologist. Subsequently, machine learning methods of various types, such as random forests and support vector machines, were able to learn from these features and identify lesions. With the advent of deep learning with multiple connected processing layers, it was no longer necessary to extract such features from an image before feeding them into the algorithm as representations of the image. The most widely used deep learning model is the convolutional neural network (CNN). A CNN can do feature extraction automatically and analyse that information directly. All that a CNN needs is a (suitably labelled) training set of the images themselves. Rather than working directly with the large number of pixels in the image, the initial layers of the CNN generate increasingly complex representations of those images which, though much smaller than the original, capture important features that the rest of the network uses for analysis and classification.
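The filtering operation at the heart of those initial CNN layers is small enough to sketch directly. The snippet below is a minimal single-channel 2D convolution (stride 1, no padding, bias or activation, all of which real CNN layers add), shown only to illustrate how a learned kernel turns raw pixels into a feature map.

```java
// Minimal sketch of the 2D convolution at the core of a CNN layer:
// slide a small kernel over the image and record its response at each position.
// (Single channel, stride 1, "valid" padding; no bias or activation,
// which real CNN layers would add.)
public class ConvolutionSketch {

    static double[][] convolve(double[][] image, double[][] kernel) {
        int kh = kernel.length, kw = kernel[0].length;
        int oh = image.length - kh + 1, ow = image[0].length - kw + 1;
        double[][] featureMap = new double[oh][ow];
        for (int y = 0; y < oh; y++) {
            for (int x = 0; x < ow; x++) {
                double sum = 0.0;
                for (int ky = 0; ky < kh; ky++) {
                    for (int kx = 0; kx < kw; kx++) {
                        sum += image[y + ky][x + kx] * kernel[ky][kx];
                    }
                }
                featureMap[y][x] = sum;
            }
        }
        return featureMap;
    }

    public static void main(String[] args) {
        double[][] image = {
                {0, 0, 1, 1},
                {0, 0, 1, 1},
                {0, 0, 1, 1},
                {0, 0, 1, 1}};
        // A simple vertical-edge detector; in a CNN these weights are learned.
        double[][] kernel = {
                {-1, 0, 1},
                {-1, 0, 1},
                {-1, 0, 1}};
        double[][] edges = convolve(image, kernel);
        for (double[] row : edges) {
            System.out.println(java.util.Arrays.toString(row));
        }
    }
}
```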


Risk Management and Business Models on NEO Blockchain


The original blockchain architecture for Bitcoin is not built for Turing-complete smart contracts; we need another architecture for large-scale decentralized applications that fulfill the requirements of business and economic use cases. As such, PoW and PoS are both unsuitable. To meet the requirements, NEO Blockchain is built on delegated Byzantine Fault Tolerance (dBFT), an adapted BFT algorithm by Erik Zhang. The advantage of dBFT is that it cannot branch. There is always one and only one block proposed at any single time. ... The transaction is complete directly after it is included in a block, in other words in one single confirmation. It also means that the transaction time is effectively equal to the block time; if the block time is 5 seconds, then the transaction is received in 5 seconds. Moreover, no risk management is needed when sending and receiving large-volume and high-value transactions. With no branching during operation, all transactions are 100% complete after they are included in a block, so no risk analysis of finality has to be made.


Democratizing Stream Processing with Apache Kafka & KSQL


To show the value of stream processing, let’s pick a motivating example that is applicable across many different industries. Imagine that you need to create and maintain, in real time, 360-degree profiles for your customers. This would be useful for many reasons, such as creating a better customer experience; for instance, "This premium customer tried several times in the past five minutes to check out his shopping cart but failed because our latest website update was buggy; let’s offer him a discount immediately and apologize for the bad user experience." It would also help minimize risk; for instance, "This new payment seems fraudulent because the payment originated outside the US, yet the customer’s mobile phone app reports she is in New York City. Let’s block the payment and contact the customer immediately." What is needed for this use case is to bring together, in real time, data from a variety of internal and possibly also external channels, and then consolidate this information in comprehensive customer profiles (sometimes called customer 360-degree profiles) that are continuously updated as soon as new information is available in any of the channels.
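KSQL expresses that consolidation as continuous SQL over Kafka topics; as a plain-Java sketch of the underlying logic only (not the Kafka or KSQL API), the snippet below merges events arriving from different channels into a per-customer profile keyed by customer ID. The event shape and channel names are illustrative assumptions.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of the consolidation logic behind a customer 360 profile: events from
// different channels are merged, keyed by customer ID, as soon as they arrive.
// (In the article's setting this runs continuously over Kafka topics via KSQL;
// the event shape and channel names here are illustrative assumptions.)
public class Customer360Sketch {

    record Event(String customerId, String channel, String attribute, String value) {}

    public static void main(String[] args) {
        List<Event> incoming = List.of(
                new Event("c-42", "web",        "last_cart_status", "checkout_failed"),
                new Event("c-42", "mobile_app", "reported_location", "New York City"),
                new Event("c-42", "payments",   "last_payment_origin", "outside US"));

        // customerId -> continuously updated profile of attribute/value pairs.
        Map<String, Map<String, String>> profiles = new HashMap<>();
        for (Event e : incoming) {
            profiles.computeIfAbsent(e.customerId(), id -> new HashMap<>())
                    .put(e.channel() + "." + e.attribute(), e.value());
        }

        // Downstream logic (discounts, fraud checks) reads the merged profile.
        System.out.println(profiles.get("c-42"));
    }
}
```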


DARL and Whitebox machine learning

The most common is supervised learning. This is where you have collected multiple examples of the inputs and outputs of some process you want to learn, recorded in a database or some code representation like XML or JSON, and the machine learning algorithm tries to create a model that reproduces the outputs when presented with the inputs. If you remember from the previous article, DARL inputs and outputs can be Textual, Categorical, Numeric or Temporal. Machine learning here is limited to categorical and numeric inputs and outputs. Learning is limited to a single output at a time. If that output is categorical, then classification has been performed; if numeric, then prediction. The data used to train the model is a training set, and some of the data may be put aside to form a test set. With this machine learning algorithm you specify the percentage of the data to train on, and the system will randomly split the data into two groups. Although problems that have an existing analytic solution are sometimes used to test ML algorithms, for instance getting a model to copy some logical relationship, in the real world no one in their right mind would use a machine learning algorithm to learn something for which an analytic model already exists.
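The random percentage split described here is simple to sketch; the snippet below is an illustration of the idea, not DARL's own implementation: shuffle the examples, then cut the list at the requested training percentage.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Minimal sketch of a random train/test split: shuffle the examples,
// then cut the list at the requested training percentage.
public class TrainTestSplit {

    static <T> List<List<T>> split(List<T> examples, double trainPercent, long seed) {
        List<T> shuffled = new ArrayList<>(examples);
        Collections.shuffle(shuffled, new Random(seed));
        int cut = (int) Math.round(shuffled.size() * trainPercent / 100.0);
        return List.of(shuffled.subList(0, cut),               // training set
                       shuffled.subList(cut, shuffled.size())); // test set
    }

    public static void main(String[] args) {
        List<String> examples = List.of("e1", "e2", "e3", "e4", "e5",
                                        "e6", "e7", "e8", "e9", "e10");
        List<List<String>> sets = split(examples, 80, 42L);
        System.out.println("train: " + sets.get(0));
        System.out.println("test:  " + sets.get(1));
    }
}
```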



Quote for the day:


"Obstacles can't stop you. Problems can't stop you. Most of all, other people can't stop you. Only you can stop you." -- Author, Jeffrey Gitomer