Daily Tech Digest - July 13, 2018

Bill Hoffman of the Industrial Internet Consortium talks AI, IoT and more
Price point, availability, wired or wireless, then the fact that you can hook them up to the Ethernet – IPv6 provides almost no degradation of performance – so you can put a lot of stuff on it, which we also couldn’t do 30 years ago. We were all running Novell local area nets at the time! Who remembers Novell? So I think the technology has become much more robust and available, such that we’re able to use the big data and apply predictive analytics, and use these things in industrial systems that we couldn’t have even dreamed of 20 years ago. And when we say “industrial” [Internet of Things], the “Industrial Internet” is really an “industry” Internet, not just manufacturing per se. “Industrial Internet” was actually a term of art GE had coined, back in, I believe, 2013, and they didn’t trademark it intentionally, because they wanted it to remain a term of art. So when Richard [Soley] and I sat around the table with the five founders [of the Industrial Internet Consortium], we had hours of discussion about what to call this new entity we were going to create.



Cryptocurrency Exchange Developer Bancor Loses $23.5 Million

Some of Bancor's losses, however, are recoverable. Bancor says it has recouped $10 million worth of BNT, a type of token that facilitates trades within its exchange. How Bancor executed that recovery, however, has led to a heated debate among cryptocurrency enthusiasts. BNT differs from bitcoin in that it is a centrally generated token. New bitcoins are created through a process called mining, in which computers that verify transactions on the network are rewarded with a slice of bitcoin. BNT, like many other cryptocurrencies such as Ripple's XRP, Cardano's Ada, Block.one's EOS and Stellar's Lumens, isn't mined. These types of coins have powered Initial Coin Offerings, in which an organization creates a centrally issued coin and sells it to raise funding. ICOs, which some contend could expose investors to fraud, are being closely analyzed by regulators around the world. The question is whether the coins or tokens that are issued are more like securities, akin to stocks, than assets. The sale of securities often entails a different, stricter set of trading rules.


Peer Reviews Either Sandbag or Propel Agile Development


First, peers likely provide valuable feedback and have fresh eyes to catch mistakes that you might miss after spending hours working. Second, working on a fast-moving Agile team, you need to continually build consensus so that there is not a communication backlog. Lastly, for teams working in highly-regulated industries, peer reviews may be a required piece of a larger software assurance program. As more software development teams trend toward an Agile approach, software releases are becoming more frequent. If you are not able to speed up your peer review cycles in tandem, you may start to sacrifice quality to hit deadlines. That then translates to a buildup of technical debt. How can you avoid this scenario? It takes structure, but flexible structure. ... Most teams don’t have an explicit plan around their internal communications. The tools that they employ typically dictate the communication norms. If your team adopts Slack or another messaging app, then it quickly becomes common for folks to have short, timely chats. The expectation is that the other person replies within a relatively short timeframe.


Doing Performance Testing Easily using JUnit and Maven

Sometimes, we tend to think that performance testing is not part of the development process. This is probably because no stories get created for it during the usual development sprints, which means this important aspect of a product's or service's APIs is not taken care of. But that's not the point; the point is, why do we think it should not be part of the usual development cycle? Or why do we keep it until the end of the project cycle? Adding more ground to the above thinking, there are no straightforward approaches to performance testing the way there are for unit testing, feature/component testing, e2e integration testing or consumer-contract testing. So the developers or the performance testers (sometimes a specialized team) are asked to choose a standalone tool from the marketplace, produce some fancy performance-testing reports, and share those reports with the business or technology team. That means it is done in isolation, and sometimes after or towards the end of the development sprints, approaching the production release date.
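The gap described here is smaller than it looks: a performance check can live in the same suite as any other test. Below is a minimal sketch of the idea using only the JDK (the operation, iteration count and 200 ms budget are illustrative); in a real project the measurement would sit inside a JUnit @Test method and run from Maven via the Surefire or Failsafe plugin.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LatencyCheck {

    // Run the operation repeatedly and return the 95th-percentile latency in milliseconds.
    static double p95LatencyMillis(Runnable op, int iterations) {
        List<Long> samples = new ArrayList<>();
        for (int i = 0; i < iterations; i++) {
            long start = System.nanoTime();
            op.run();
            samples.add(System.nanoTime() - start);
        }
        Collections.sort(samples);
        int idx = (int) Math.ceil(0.95 * iterations) - 1;
        return samples.get(idx) / 1_000_000.0;
    }

    public static void main(String[] args) {
        // Stand-in for the API call under test.
        Runnable op = () -> { /* e.g. client.getOrders() */ };
        double p95 = p95LatencyMillis(op, 100);
        // Inside a JUnit test this would be assertTrue(p95 < 200).
        if (p95 >= 200) {
            throw new AssertionError("p95 latency budget exceeded: " + p95 + " ms");
        }
    }
}
```

Because the check runs on every build, a latency regression fails the pipeline the same way a functional bug would, instead of surfacing in a standalone report near the release date.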


Government Bodies Are At Risk Online

Commitment to Online Trust and Security
Busy government staff don’t always have the time to learn cybersecurity best practice. Government employees working in departments such as planning, finance and human resources, and the administration staff who support them, have intense workloads – so it’s important they can work quickly and efficiently, without compromising their safety online. It’s thought that as many as 95% of successful online hacks come down to human error. Mistakes are made by those who aren’t educated in online risks and can’t spot threats to their data. Sometimes it’s not a lack of knowledge, but a problem with relying solely on human performance. Even the most educated person can make mistakes that cause huge data breaches. Government organisations need to limit the risk of human error as much as possible. If it’s a case of staff reusing static or simple passwords that can be stolen using brute-force attacks, then 2FA can be a solution: each one-time code becomes invalid as soon as it has been used, successfully or unsuccessfully.
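The "invalid once used" property comes from one-time-password schemes. As a hedged illustration, here is a minimal sketch of HOTP (RFC 4226), the counter-based algorithm behind many 2FA tokens, using only the JDK; a production deployment would rely on a vetted library and handle counter resynchronisation and throttling.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Hotp {

    // RFC 4226: HMAC-SHA1 over the 8-byte big-endian counter, dynamically truncated to 6 digits.
    static int hotp(byte[] secret, long counter) throws Exception {
        byte[] msg = new byte[8];
        for (int i = 7; i >= 0; i--) {
            msg[i] = (byte) (counter & 0xff);
            counter >>>= 8;
        }
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] h = mac.doFinal(msg);
        int offset = h[h.length - 1] & 0x0f;          // dynamic truncation offset
        int bin = ((h[offset] & 0x7f) << 24)
                | ((h[offset + 1] & 0xff) << 16)
                | ((h[offset + 2] & 0xff) << 8)
                |  (h[offset + 3] & 0xff);
        return bin % 1_000_000;                       // six-digit code
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "12345678901234567890".getBytes(StandardCharsets.US_ASCII);
        System.out.println(hotp(secret, 0)); // RFC 4226 test vector: 755224
    }
}
```

Because the server advances its counter after each successful verification, a code that has been accepted once can never be accepted again, which is exactly what defeats replayed or brute-forced static passwords.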


Building the future of retail with the Internet of Transport

Wincanton wants to use sensors to automatically alert its employees to any potential deterioration in products during transportation. As part of this project, Gifford says the firm's technological efforts have produced developments in three key areas so far. He points first to Winsight, an app that enables a paperless cab, so all the paper lorry drivers normally carry, such as routes and proof of delivery, is wrapped up into a single piece of software on a smart device. The app is available to the firm's own drivers and sub-contractors. The second key element is telematics. "That's about us plugging into the vehicle's systems and sending information back to the business in a consistent way," says Gifford. Wincanton recently announced it will install MiX telematics in 1,800 of its vehicles as part of an ongoing safety programme, with information used to optimise driver performance. The final element is the implementation of a new, cloud-based transport management system (TMS). This TMS will form the basis for the firm's digital supply-chain strategy, with telematics helping to hone operational performance and Winsight helping to ensure business efficiency and effectiveness.


Here come the first blockchain smartphones: What you need to know

Sirin blockchain phones
It appears the world's third-biggest handset maker may win a race to become the industry's first to offer a blockchain smartphone; Swiss-based Sirin Labs announced its own $1,000 smartphone and $800 all-in-one PC with native blockchain capabilities last October; it scheduled the release for this September, according to reports. HTC, however, plans to release its phone this quarter. HTC's blockchain phone has already received "tens of thousands" of reservations globally, Phil Chen, the chief crypto officer at HTC, said in an interview during the RISE conference in Hong Kong this week. Like HTC's upcoming $1,000 Exodus blockchain smartphone, Sirin's Finney smartphone will come with a built-in cold-storage crypto wallet for storing bitcoin, Ethereum and other digital tokens, and it will run on an open-source, feeless blockchain. Sirin was able to raise more than $100 million in an initial coin offering for the Android-based Finney smartphone and PC. Both will run Sirin's open-source operating system, SIRIN OS.


Apache Mesos and Kafka Streams for Highly Scalable Microservices

Apache Mesos is a cluster manager that provides efficient resource isolation and sharing across distributed applications or frameworks. It sits between the application layer and the operating system, abstracting away data-center resources to make it easy and efficient to deploy and manage distributed applications in large-scale clustered environments. DC/OS is a Mesosphere-backed framework on top of Apache Mesos. As a datacenter operating system, DC/OS is itself a distributed system, a cluster manager, a container platform, and an operating system. DC/OS has evolved a lot in the past couple of years, and supports new technologies such as Docker as its container runtime and Kubernetes as its orchestration framework. As you can imagine from this high-level description, DC/OS is a first-class infrastructure choice for realizing a scalable microservice architecture.
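Deploying a microservice on DC/OS typically means handing its Marathon scheduler a declarative app definition and letting Mesos place the containers. A hypothetical definition (the service name, image, resource figures and health-check settings are all illustrative):

```json
{
  "id": "/orders-service",
  "instances": 3,
  "cpus": 0.5,
  "mem": 512,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "example/orders-service:1.0" }
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/health", "intervalSeconds": 10 }
  ]
}
```

Marathon keeps the requested instance count running and replaces containers that fail their health checks; the field names follow the Marathon app-definition format, though exact options vary by DC/OS version.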


Your Roadmap to an Open Mobile Application Development Strategy


An MADP allows a business to rapidly build, test and deploy mobile apps for smartphones and tablets. It can minimize the need for coding, integrate building-block services, such as user management, data management and push notifications, and deliver apps across a broad array of mobile devices. The result is a common and consistent approach, so developers can customize their apps without worrying about back-end systems or implementation details. Michael Facemire, principal analyst at Forrester Research, observed in an analysis of Mobile Development Platforms that companies fall into two camps: “those that prefer an all-inclusive platform”, who represent the greatest, though waning part of platform spend today, and “those that prefer to manage a collection of services”. The first group of customers work with large infrastructure vendors, such as IBM, Oracle, and SAP, who offer complete environments for development, delivery, and management of mobile applications. They benefit from platform stability and custom support, but may struggle compared with other platforms when building mobile experiences outside of their proprietary ecosystems.


Hacker-powered security is reaching critical mass

“Crowdsourced security testing is rapidly approaching critical mass, and ongoing adoption and uptake by buyers is expected to be rapid,” Gartner reported. Governments are leading the way with adoption globally. In the government sector there was a 125 percent increase year over year, with new program launches including the European Commission and Singapore's Ministry of Defence joining the U.S. Department of Defense on HackerOne. Proposed legislation, like the Hack the Department of Homeland Security Act, the Hack Your State Department Act and the Prevent Election Voting Act, and the Department of Justice Vulnerability Disclosure Framework, further demonstrates public sector support for hacker-powered security. Industries beyond technology continued to increase their share of the overall hacker-powered security market. Consumer Goods, Financial Services & Insurance, Government, and Telecommunications account for 43 percent of today’s bug bounty programs. Automotive programs increased 50 percent in the past year, and Telecommunications programs increased 71 percent.



Quote for the day:


"Experience without theory is blind, but theory without experience is mere intellectual play." -- Immanuel Kant


Daily Tech Digest - July 12, 2018

WAFs Should Do A Lot More Against Current Threats Than Covering OWASP Top 10
Organizations invest to increase network capacity, ultimately accommodating fictitious demand. Accurately distinguishing between human and bot-based traffic, and between “good” bots (like search engines and price-comparison services) and “bad” bots, can translate into substantial savings and an uptick in customer experience. The bots won’t make it easy on you: they can now mimic human behavior and bypass CAPTCHA and other challenges. Moreover, dynamic IP attacks render IP-based protection ineffective. Oftentimes, open-source dev tools (PhantomJS, for instance) that can process client-side JavaScript are abused to launch brute-force, credential-stuffing, DDoS and other automated bot attacks. To manage bot-generated traffic effectively, a unique identification (like a fingerprint) of the source is required. Since bot attacks use multiple transactions, the fingerprint allows organizations to track suspicious activity, attribute violation scores and make an educated block/allow decision with a minimal false-positive rate.
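The fingerprint-and-score idea can be sketched in a few lines: derive a stable identifier from client attributes that survive IP rotation, then accumulate violation scores against it across transactions. The attribute names and the blocking threshold below are purely illustrative.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class BotScorer {

    private final Map<Integer, Integer> scores = new HashMap<>();

    // Derive a stable fingerprint from attributes that survive IP rotation.
    static int fingerprint(String userAgent, String acceptLanguage, String tlsSignature) {
        return Objects.hash(userAgent, acceptLanguage, tlsSignature);
    }

    // Accumulate violation points for a fingerprint across transactions;
    // returns true while the source is still below the blocking threshold.
    boolean recordAndAllow(int fp, int violationPoints) {
        int total = scores.merge(fp, violationPoints, Integer::sum);
        return total < 100;
    }
}
```

Because the score is keyed on the fingerprint rather than the IP address, a bot that rotates through thousands of addresses still accumulates violations against a single identity.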



Why your whole approach to security has to change

New technology philosophies like DevOps and Agile provide the opportunity to build security into the whole lifecycle that exists around IT use. By embedding proper security processes around cloud resources, companies can make their workflows deliver security into the fabric of this new architecture from the start. Getting this degree of oversight and security in place involves making security goals and objectives clear to everyone, while also enabling those processes to run smoothly and effectively. It involves making security management into more than just a blocker for poor software; instead, it is about making services available quickly within those workflows. This process is termed transparent orchestration. Transparent orchestration involves a re-wiring of security to match how this IT infrastructure has been rebuilt. As part of this, security must be automatically provisioned across a complete mix of internal and external networks, spanning everything from legacy data centre IT through to multi-cloud ecosystems and new container-based applications.



Top 3 practical considerations for AI adoption in the enterprise

Explainable AI centers on the ability to answer the question, “Why?” Why did the machine make a specific decision? The reality is that many of the new AI approaches that have emerged are inherently “black boxes.” There are many inputs going into the box, and out of it comes the actual decision or recommendation. However, when people try to unpack the box and figure out its logic, it becomes a major challenge. This can be tough in regulated markets, which require companies to disclose and explain the reasoning behind specific decisions. Further, the lack of explainable AI can affect the change management needed throughout the company to make AI implementations succeed. If people cannot trace an answer to an originating dataset or document, it can become a hard value proposition to staff. Implementing AI with traceability is a way to address this challenge. For example, commercial banks manage risk in an online portfolio. A bank may lend money to 5,000 small- to medium-sized businesses. It will monitor their health in balance sheets within the portfolio of loans. These sheets may be in different languages or have different accounting standards.



Blockchain as a Re-invention of the Business
The current model is very much authority-centric, which leaves a narrow space to individuals and small legal entities, who are mere spectators or limited contributors. If we take into account the democratization of choice that boosts people’s motivation nowadays, we can come to the conclusion that the current approach stands against the right to self-empowerment. Blockchain, a wonderful combination of mathematics and technology, has made it possible to distribute power to the nations where it actually belongs. “With great power comes great responsibility,” as the Marvel Comics superheroes like to say. This could be the motto of DLT. Blockchain is a natural service area that hooks up to the Internet as the connectivity layer. Business globalization and economic freedom are two main forces of paramount significance underpinning the evolution of the distributed transactional platform. The central system played an absolutely vital role in times of corruption and global disorders or wars. In the current reality, people deserve to operate within a planetary technological network.


A Look at the Technology Behind Microsoft's AI Surge

Lambda architecture, while a general computing concept, is built into the design of Microsoft's IoT platform. The design pattern here focuses on managing large volumes of data by splitting it into two paths -- the speed path and the batch path. The speed path offers real-time querying and alerting, while the batch path is designed for larger data analysis. While not all AI scenarios use both of these paths, this is a very common edge computing pattern. At the speed layer, Azure offers two main options -- Microsoft's own Azure Stream Analytics offering and the open source Apache Kafka, which can be implemented using the HDInsight Hadoop as a Service (HDaaS) offering, or on customers' own virtual machines (VMs). Both Stream Analytics and Kafka offer their own streaming query engines (Stream Analytics' engine is based on T-SQL). Additionally, Microsoft offers Azure IoT and Azure Event Hubs, which connect edge devices (such as sensors) to the rest of the architecture. IoT Hubs offer a more robust solution with better security; Event Hubs are specifically designed just for streaming Big Data from system to system.
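The two paths are easy to picture in code. This toy sketch (not tied to any Azure service) fans each incoming event out to a speed path that keeps an always-current aggregate and a batch path that simply retains raw events for later large-scale analysis.

```java
import java.util.ArrayList;
import java.util.List;

public class LambdaSketch {

    long runningTotal = 0;                          // speed path: always-current aggregate
    final List<Long> batchLog = new ArrayList<>();  // batch path: raw events for later jobs

    // Every event is fanned out to both paths.
    void ingest(long reading) {
        runningTotal += reading;   // answers real-time queries and alerting immediately
        batchLog.add(reading);     // retained for larger, slower batch analysis
    }
}
```

The speed path can answer "what is the total right now?" at any moment, while the batch path preserves everything a later, heavier job might need; that duplication of the data across two stores is the defining trade-off of the pattern.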




U.S. regulators grappling with self-driving vehicle security
U.S. Transportation Secretary Elaine Chao said in San Francisco on Tuesday that “one thing is certain — the autonomous revolution is coming. And as government regulators, it is our responsibility to understand it and help prepare for it.” She said “experts believe AVs can self-report crashes and provide data that could improve response to emergency situations.” One issue is whether self-driving vehicles should be required to be accessible to all disabled individuals, including the blind, the report noted. The Transportation Department is expected to release updated autonomous vehicle guidance later this summer that could address some of the issues raised during the meetings. Automakers, Waymo, a unit of Alphabet Inc, and other participants in the nascent autonomous vehicle industry have called for federal rules to avoid a patchwork of state regulation. However, the process of developing a federal legal framework for such vehicles is slow moving.


The rise of artificial intelligence DDoS attacks
The major turning point in the evolution of DDoS came with the automatic spreading of malware. Malware is a term you hear a lot; it describes malicious software. Its automatic spreading represented the major route to automation and marked the first phase of fully automated DDoS attacks. Now, distribution could be increased and attacks scheduled without human intervention. Malware could automatically infect thousands of hosts and apply lateral-movement techniques, spreading from one network segment to another. Moving between network segments this way is known as beachheading, and malware could beachhead from one part of the world to another. There was still one drawback, and for the bad actor it was a major one. The environment was still static, never dynamically changing signatures based on responses from the defense side. The botnets were not variable in behavior. They were ordered by the C&C servers to sleep and wake up, with no mind of their own. As I said, there is only so much bandwidth out there. So these types of network attacks started to become less effective.



Automation could lift insurance revenue by $243 billion
First, explaining the vision clearly and securing leadership buy-in. “By establishing a clear and compelling vision, organizations demonstrate that intelligent automation is a strategic imperative and are able to answer critical questions,” the report says. Second, developing a clear pilot process. “The automation business case will need to assess the impact on transaction processing time and employee time saved and consider variables such as the volume of transactions or the number of exceptions in a specific process,” according to Capgemini. Firms should also consider starting with “low-hanging fruit” and engaging talent through hackathons and accelerators, the report says. Third, scaling up with an automation center of excellence. “To promote effective collaboration with the CoE, organizations should consider incentivizing functions based on business benefits derived from implementation of intelligent automation,” Capgemini suggests. Fourth, industrializing automation.


In-memory computing: enabling continuous learning for the digital enterprise
Today’s in-memory computing platforms are deployed on a cluster of servers that can be on-premises, in the cloud, or in a hybrid environment. The platforms leverage the cluster’s total available memory and CPU power to accelerate data processing while providing horizontal scalability, high availability, and ACID transactions with distributed SQL. When implemented as an in-memory data grid, the platform can be easily inserted between the application and data layers of existing applications. In-memory databases are also available for new applications or when initiating a complete rearchitecting of an existing application. The in-memory computing platform also includes streaming analytics to manage the complexity around dataflow and event processing. This allows users to query active data without impacting transactional performance. This design also reduces infrastructure costs by eliminating the need to maintain separate OLTP and OLAP systems.
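"Inserted between the application and data layers" usually means the grid acts as a read-through cache. A toy sketch of that pattern, with a plain map standing in for the distributed in-memory store and a function standing in for the underlying database:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class ReadThroughGrid<K, V> {

    private final Map<K, V> grid = new HashMap<>(); // stands in for the distributed in-memory store
    private final Function<K, V> database;          // loader for the underlying data layer

    ReadThroughGrid(Function<K, V> database) {
        this.database = database;
    }

    // Serve from memory; on a miss, load once from the data layer and keep the result.
    V get(K key) {
        return grid.computeIfAbsent(key, database);
    }
}
```

Because the application only talks to `get`, the grid can be slipped between existing application and data layers without rewriting either side; a real data grid adds partitioning, replication and write-through on top of this core idea.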



Hospital Diverts Ambulances Due to Ransomware Attack
The ransomware attack Monday impacted the enterprise IT infrastructure, including the electronic health records system, at Harrisonville, Mo.-based Cass Regional Medical Center, which includes 35 inpatient beds and several outpatient clinics, a spokeswoman tells Information Security Media Group. As of Wednesday morning, about 70 percent of Cass' affected systems were restored, she says. Except for diverting urgent stroke and trauma patients to other hospitals "out of precaution," Cass Regional has continued to provide inpatient and outpatient services for less urgent situations as it recovers from the attack, she says. "We've gone to our downtime processes," she says, which include resorting to paper records while the hospital's Meditech EHR system is offline during the restoration and forensics investigation. The hospital is working with an unnamed international computer forensics firm to decrypt data in its systems, she adds, declining to disclose the type of ransomware involved in the attack or whether the hospital paid a ransom to obtain a decryption key from extortionists.


Quote for the day:

"The mediocre leader tells. The good leader explains. The superior leader demonstrates. The great leader inspires." -- Gary Patton

Daily Tech Digest - July 11, 2018

Georgia Tech report outlines the future of smart cities

One key point researchers made is that IoT deployed in public spaces – in collaboration between city governments, private enterprise and citizens themselves – has a diverse group of stakeholders to answer to. Citizens require transparency and rigorous security and privacy protections, in order to be assured that they can use the technology safely and have a clear understanding of the way their information can be used by the system. The research also drilled down into several specific use cases for smart city IoT, most of which revolve around engaging more directly with citizens. Municipal services management offerings, which allow residents to communicate directly with the city about their waste management or utility needs, were high on the list of potential use cases, along with management technology for the utilities themselves, letting cities manage the electrical grid and water system in a more centralized way. Public safety was another key use case – for example, the idea of using IoT sensors to provide more accurate information to first responders in case of emergency.



10 Tips for Managing Cloud Costs

Part of the reason why cost management is so challenging is because organizations are spending a lot of money on public cloud services. More than half of enterprises (52%) told RightScale that they spend more than $1.2 million per year on cloud services, and more than a quarter (26%) spend over $6 million. That spending will likely be much higher next year, as 71% of enterprises plan to increase cloud spending by at least 20%, while 20% expect to double their current cloud expenditures. Given those numbers, it's unsurprising that Gartner is forecasting that worldwide public cloud spending will "grow 21.4% in 2018 to total $186.4 billion, up from $153.5 billion in 2017." Another problem that contributes to cloud cost management challenges is the difficulty organizations have tracking and forecasting usage. The survey conducted for the SoftwareONE Managing and Understanding On-Premises and Cloud Spend report found that unpredictable budget costs was one of the biggest cloud management pain points for 37% of respondents, while 30% had difficulty with lack of transparency and visibility.


How to Receive a Clean SOC 2 Report

Having a documented control matrix will be beneficial for more than just compliance initiatives; it becomes your source for how risk controls are developed and implemented and can be useful for augmenting corporate information security policies. For SOC 2, the control matrix becomes an important reference document for auditors. For instance, Trust Services Criteria 4 relate to monitoring of controls, so creating a list of how your organization is confirming controls are well designed and operating effectively makes it easy for auditors to validate that your stated controls are in place, designed to meet your security and confidentiality commitments, and are effective in doing so. Here is a concrete example: A control in your environment says servers need to be hardened to CIS benchmarks. How are you evaluating the effectiveness of this control? Are the servers hardened to your specification before going into production? Are they meeting benchmarks on an ongoing basis? An easy way to meet the monitoring requirement is to use a tool like Tripwire Enterprise.


Most Enterprise of Things initiatives are a waste of money

What’s truly needed is a consolidated ability to capture and process all of the data and convert it into meaningful insights. Many companies provide analytics engines to do this (e.g., SAP, Google, Oracle, Microsoft, IBM, etc.). But to have truly meaningful company-wide analysis, a significantly more robust solution is needed than stand-alone, singular instances of business intelligence/analytics. How should companies enable the full benefits of EoT? They need a strategy that provides truly meaningful “actionable intelligence” from all of the various data sources, not just the 15 to 25 percent that is currently analyzed. That data must be integrated into a consolidated (although it may be distributed) data analysis engine that ties closely into corporate backend systems, such as ERP, sales and order processing, service management, etc. It’s only through a tightly integrated approach that the maximum benefits of EoT can be accomplished. Many current back-office vendors are attempting to make it easier for companies to accomplish this. Indeed, SAP is building a platform to integrate EoT data into its core ERP offerings with its Leonardo initiative.


Randy Shoup Discusses High Performing Teams

It is estimated that the intelligence produced by Bletchley Park, code-named "Ultra", ended the war two years early, and saved 14 million lives. ... Although the Bletchley Park work fell under the domain of the military, there was very little hierarchy, and the organisational style was open. The decryption was conducted using a pipeline approach, with separate "huts" (physical buildings on the campus) performing each stage of intercept, decryption, cataloguing and analysis, and dissemination. There was deep cross-functional collaboration within a hut, but extreme secrecy between each of them. There was a constant need for iteration and refinement of techniques to respond to newer Enigma machines and procedures, and even though the work was conducted under an environment of constant pressure the code-breakers were encouraged to take two-week research sabbaticals to improve methods and procedures. There was also a log book for anyone to propose improvements, and potential improvements were discussed every two weeks.


Intuit's CDO talks complex AI project to improve finances


The most obvious is a chat bot, but it could also provide augmented intelligence for our customer care representative; it could provide augmented intelligence for our accountants who are working on Intuit's behalf or private accountants who are using Intuit software. It could be deployed in internal processes where product teams learn how people interact with our software through a set of focus groups. So, it's one technology that could be instantiated across many different platforms and touchpoints. That's one of the exciting aspects from a technology perspective. If you think about how a human works, there are so many things that are amazing about humans, but one is that they have the ability to rapidly change contexts and rapidly deal with a changing environment. The touchpoint doesn't matter. It doesn't matter if you're talking on video, on the phone or in person. Generally speaking, people can deal with these channels of communication very easily. But it's hard for technology to do that. Technology tends to be built for a specific channel and optimized for that channel.


Ethereum is Built for Software Developers

In part, this is all thanks to what Ethereum has accomplished in a very short period. We give too much credit to Bitcoin's price, which skyrocketed toward $20,000 in December 2017, but the reality is in the code, and Ethereum is now what all dApp platforms compare themselves with, not the decade-old Bitcoin model. As Ethereum solves the scalability problem, it will effectively untether itself from Bitcoin's speculative price volatility. If Bitcoin is a bet, Ethereum is a sure thing. The main reason is the developer community it has attracted and the wide range of startups that use it, especially in the early phases of their development. As TRON might find out, once they go independent they may have a more difficult time attracting software developers. Ethereum must move beyond piggybacking on the ICOs of 2017 and be the open-source, public, distributed world operating system it was designed to be. It has massive potential to fill, and in a crypto vacuum of hype and declining prices, Ethereum is perhaps the last chance before 2020, as the Chinese blockchains take over. The window is disappearing, guys.


Software Flaws: Why Is Patching So Hard?

"As OCR states, identifying all vulnerabilities in software is not an easy process, particularly for the end user or consumer," says Mac McMillan, CEO of security consultancy CynergisTek. Among the most difficult vulnerabilities to identify and patch in healthcare environments "are those associated with software or devices of a clinical nature being used directly with patients," he says. "There are many issues that make this a challenge, including operational factors like having to take the system offline or out of production long enough to address security. Hospitals don't typically stop operations because a patch comes out. The more difficult problems are ones associated with the vulnerability in the software code itself, where a patch will not work, but a rewrite is necessary. When that occurs, the consumer is usually at a disadvantage." Fricke says applications that a vendor has not bothered to keep current are the trickiest to patch. "Some vendors may require the use of outdated operating systems or web browsers because their software has not been updated to be compatible with newer versions of operating systems or web browsers," he says.


5 security strategies that can cripple an organization

Security teams today have a two-faceted information problem: siloed data and a lack of knowledge. The first issue stems from the fact that many companies are only protecting a small percentage of their applications and, therefore, have a siloed view of the attacks coming their way. Most organizations prioritize sensitive, highly critical applications at the cost of lower-tier apps, but hackers are increasingly targeting the latter and exploiting them for reconnaissance and often much more. It’s amazing how exposed many companies are via relatively innocuous tier-2 and legacy applications. The second, and more significant, issue can be summarized simply as “you don’t know what you don’t know.” IT has visibility into straightforward metrics, but it often lacks insight into the sophistication of attempted breaches, how its risk compares to peers and the broader marketplace, and other trends and key details about incoming attack traffic. With visibility into only a small percentage of the attack surface, it’s very difficult to know whether the company is being targeted and exploited. Given the resource challenges noted above, it’s unrealistic to attempt to solve this problem with manpower alone.


What's the future of server virtualization?

Prior to server virtualization, enterprises dealt with server sprawl, with underutilized compute power, with soaring energy bills, with manual processes and with general inefficiency and inflexibility in their data-center environments. Server virtualization changed all that and has been widely adopted. In fact, it’s hard to find an enterprise today that isn’t already running most of its workloads in a VM environment. But, as we know, no technology is immune to being knocked off its perch by the next big thing. In the case of server virtualization, the next big thing is going small. Server virtualization took a physical device and sliced it up, allowing multiple operating systems and multiple full-blown applications to draw on the underlying compute power. In the next wave of computing, developers are slicing applications into smaller microservices that run in lightweight containers, and also experimenting with serverless computing (also known as function-as-a-service, or FaaS). In both of these scenarios, the VM is bypassed altogether and code runs on bare metal.



Quote for the day:


"The simple things are also the most extraordinary things, and only the wise can see them." -- Paulo Coelho


Daily Tech Digest - July 10, 2018

The value of visibility in your data centre

Keeping key enterprise applications up and running well is an absolute requirement for modern business. As estimated by Gartner, IDC and others, the cost of IT downtime averages out to around £4,200 per minute. A simple infrastructure failure might cost around £75,000, while the failure of a critical, public-facing application costs more like £378,000 to £755,000 per hour. When failures impact large-scale global logistics and cause widespread inconvenience to customers (for example, last May’s British Airways operations systems failure), costs can quickly become staggering. BA estimated losing $102.19 million (£77.08 million) in hard costs, including airfare refunds to stranded passengers, plus incalculable damage to reputation. BA’s parent company, IAG, subsequently lost $224 million (£170 million) in value, based on its then-current stock valuation. Preventing such disasters, or intervening effectively and rapidly when they occur, means giving developers and operations staff (DevOps) visibility into IT infrastructure, networks, and applications.



Entrepreneurs think differently about risk


What is the worst thing that can happen? This is where a lot of people start, and it’s why they don’t even bother evaluating the rest of it. A person who hates their job and doesn’t want to work for anyone again might shrug off becoming a freelancer because of the risk involved in quitting a 9–5: losing the steady paycheque and benefits, potentially not having clients and needing to find a new job. However, I like to think, “then what?” Will that kill me? No, it just means you may need to find a new job. Plenty of people get laid off and need to find new jobs; that’s not the end of the world. If you’re worried that raising money for your startup will cause you to give away too much ownership in your company and that your investors will one day take control and oust you from the company, that’s a real fear. But even then, you would still own your shares in the company. Maybe if they oust you from the company, it’s because you’re doing an abysmal job as CEO and they need someone who can grow the company. You would still own a big chunk of that company.


APT Trends Report Q2 2018


We also observed some relatively quiet groups coming back with new activity. A noteworthy example is LuckyMouse (also known as APT27 and Emissary Panda), which abused ISPs in Asia for waterhole attacks on high profile websites. We wrote about LuckyMouse targeting national data centers in June. We also discovered that LuckyMouse unleashed a new wave of activity targeting Asian governmental organizations just around the time they had gathered for a summit in China. Still, the most notable activity during this quarter is the VPNFilter campaign attributed by the FBI to the Sofacy and Sandworm (Black Energy) APT groups. The campaign targeted a large array of domestic networking hardware and storage solutions. The malware is even able to inject itself into traffic in order to infect computers behind the infected networking device. We have provided an analysis of the EXIF-to-C2 mechanism used by this malware. This campaign is one of the most relevant examples we have seen of how networking hardware has become a priority for sophisticated attackers. The data provided by our colleagues at Cisco Talos indicates this campaign was at a truly global level.


Organizations must act to safeguard 'the right to be forgotten'

The immediate need is clear—the capability to delete accounts and any associated personal data. But this is not as simple as it might first appear. Organizations are loath to give up data—it helps them improve their own business models, and quite frankly, it is profitable. One need only look at the recent reselling of user information to third parties to realize its value. Enterprises, then, would need to be compelled to part with what they perceive as valuable—and governments are attempting this with legislation such as GDPR. Beyond the necessary business case, however, lie technological challenges. While many online services have built-in deletion and removal options, lingering personal data is a different matter. If this personal information is located in an application or structured database, then the process is relatively straightforward—eliminate the associated account and its data is also removed. If the sensitive data is in files—detached from applications governed by the business—then those files behave like abandoned satellites orbiting the earth, forever floating in the void of network-based file shares and cloud-based storage.
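The "straightforward" structured-database case described above can be sketched with a cascading foreign key, where deleting the account row removes its dependent personal data in one step. This is a minimal illustration using SQLite; the table and column names are invented for the example.

```python
# Sketch of account deletion with cascading removal of associated data.
# Hypothetical schema: an accounts table and a dependent orders table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, email TEXT)")
db.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    account_id INTEGER REFERENCES accounts(id) ON DELETE CASCADE,
    detail TEXT)""")

db.execute("INSERT INTO accounts VALUES (1, 'jane@example.com')")
db.execute("INSERT INTO orders VALUES (10, 1, 'shipped to 2 New St')")

# One delete on the parent row erases the dependent personal data too.
db.execute("DELETE FROM accounts WHERE id = 1")
remaining = db.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(remaining)  # 0
```

The file-based "abandoned satellites" case has no equivalent one-liner, which is precisely the article's point.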


Big Data Is A Huge Boost To Emerging Telecom Markets

Big data in telecommunications is playing the biggest role by increasing the reach of major telecommunication brands in these markets. This is especially evident in Africa, where telecommunications market growth has been the strongest. In 2004, only 6% of African consumers owned a mobile device. This figure has grown sharply over the past 14 years: there are now over 82 million mobile users throughout the continent. In some regions of Africa, the growth has been faster than even the most ambitious technology economists could have predicted; the number of people in Nigeria who own mobile devices has been doubling every year. Pairing big data and telecom has helped spur growth in the telecommunications industry in several ways; according to an analysis by NobelCom, it will likely lead to cheaper telephone calls between consumers in various parts of the world. Here are some of the biggest. A growing number of telecommunications providers are investing more resources trying to reach consumers throughout Africa and other emerging telecom markets.


Be smart about edge computing and cloud computing

Edge computing is a handy trick. It’s the ability to place processing and data retention on a system that sits closer to the systems it collects data from, and to provide autonomous processing there. The architectural advantages are plenty, including not having to transmit all the data to the back-end systems—typically in the cloud—for processing. This reduces latency and can provide better security and reliability as well. But, and this is a big “but,” edge computing systems don’t stand alone. Indeed, they work with back-end systems to collect master data and provide deeper processing. This is how edge computing and cloud computing provide a single, symbiotic solution. They are not, and will never be, mutually exclusive. Some best practices are emerging around edge computing that allow enterprises to make better use of both platforms. ... The edge computing hype will drive confusion in the next few years. To avoid that confusion, you need to understand what role each type of system plays, and that very few new technologies simply take over from existing ones.
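As a toy illustration of the point about not transmitting all the data to the back end, an edge node might reduce a batch of raw readings to a compact summary before uplinking. Everything here (the function name, the summary fields, the 90.0 alert threshold) is invented for the sketch.

```python
# Hypothetical edge-side aggregation: process raw sensor readings locally,
# ship only a small summary (plus any anomalies) to the cloud back end.

def summarize_at_edge(readings):
    """Reduce a batch of raw readings to the record worth uplinking."""
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "alerts": [r for r in readings if r > 90.0],  # anomalies kept in full
    }

raw = [71.2, 70.8, 95.5, 70.9]       # e.g. temperatures sampled on-site
uplink = summarize_at_edge(raw)      # four raw values become one small record
print(uplink["count"], uplink["alerts"])
```

The raw batch stays at the edge; only the summary crosses the network, which is where the latency and bandwidth savings come from.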


Can Cybersecurity be Entrusted with AI?

While the technology can help fill cybersecurity skill gaps, it is a powerful tool for hackers as well. In short, AI can act as guard and threat at the same time; what matters is who uses it, and for what purpose. In the end, it all depends on natural intelligence to make good or bad use of artificial intelligence. There are paid and free tools available that attempt to modify malware to bypass machine-learning antivirus software, and the question is how to detect and stop them. Cyberattacks like phishing and ransomware are said to be much more effective when they are powered by AI. On the other hand, AI is extremely good at recognizing patterns and anomalies in behaviour, which makes it an excellent tool for threat hunting. Will AI be the bright future of security, as the sheer volume of threats becomes very difficult for humans to track alone? Or might it usher in a darker era? It all depends on natural intelligence, which is needed to develop AI and machine-learning tools in the first place. Despite popular belief, these technologies cannot replace humans (in my personal opinion); using them requires human training and oversight.


How to Adopt a New Technology: Advice from Buoyant on Utilising a Service Mesh


Adopting technology and deploying it into production requires more than a simple snap of the fingers. Making the rollout successful and delivering real improvements is even tougher. When you’re looking at a new technology, such as a service mesh, it is important to understand that the organizational challenges you’ll face are just as important as the technology. But there are clear steps you can take in order to navigate the road to production. To get started, it is important to identify what problems a service mesh will solve for you. Remember, it isn’t just about adopting technology; once the service mesh is in production, there need to be real benefits. This is the foundation of your road to production. Once you’ve identified the problem to be solved, it is time to go into sales mode. No matter how small the price tag is, it requires real investment to get a service mesh into production, and the investment will be required from more than just you. Changes impact coworkers in ways that range from learning new technology to disruption of their mission-critical tasks.


How Businesses Can Navigate the Ethics of Big Data

The laws regarding data protection and privacy differ from country to country across the world. The EU has its own distinct set of laws on the matter, visibly different from those of the United States. Privacy within the EU is often said to be stronger than in the U.S., and although the myths may exaggerate the difference, the EU is miles ahead of the U.S. when it comes to stringent data and privacy protection. Privacy is considered a fundamental right for all individuals living in the EU, and details of privacy and data protection are debated there as much as gun control is in the U.S. The U.S. does have its own privacy protection problems, but the crux of the matter is that the laws are separate for the two governing bodies. The diversity in data protection laws across countries suggests a need for globally accepted norms governing how privacy and protection are provided to users and their data. Such globally accepted norms would set the standards and a pathway for others to follow when it comes to data protection.


Selling tech initiatives to the board: Eight success tips for IT leaders

Too many IT leaders, especially if they are busy running multiple projects, underestimate how much time it takes to put together a really persuasive presentation. Some of the most compelling presentations are those built around a demo of the technology being discussed or those with a strong video presentation that draws the audience into the topic. However, demos and videos aren't going to help if you don't have a clear and cogent message for board members who are charged with ensuring that the company is well run, is making the right kinds of investments, and is positioning itself for the future. If what you present doesn't check all of these boxes, it won't succeed. ... It's easy for a technology leader to get mired in tech talk and lose an audience. The board already knows that you know tech. What it wants to know is how well you understand the business and how tech can advance it. The best way to show them that you're focused on the business is to present a clear message in plain English and to avoid technology buzzwords and levels of detail that are extraneous to the business decision that has to be made.



Quote for the day:


"Growth happens when you fail and own it, not until. Everyone who blames stays the same." -- Dan Rockwell


Daily Tech Digest - July 08, 2018

Why Big Data and AI are the Next Digital Disruptions?

Big data and Artificial Intelligence are two inextricably linked technologies, to the point that we can talk about Big Data Intelligence. Artificial Intelligence has become ubiquitous in companies in all industries where decision making is transformed by intelligent machines. The need for smarter decisions and Big Data management are the criteria that drive this trend. The convergence between Big Data and AI seems inevitable as the automation of smart decision-making becomes the next evolution of Big Data. Rising agility, smarter business processes and higher productivity are the most likely benefits of this convergence. The evolution of data management did not go smoothly. Much of the data is now stored on a computer, but there is still a lot of information on paper, despite the possibility of scanning paper information and storing it on disks or in databases. You just have to go to a hospital, an administration, a doctor’s office or any business to realize that a lot of information about customers, vendors, or products is still stored on paper. However, it is impossible to store terabytes of data produced by streaming video, text and images on paper.



Why today’s leaders need to know about the power of narratives

Effective narratives are defined by two characteristics. Firstly, they articulate the "why" - a higher purpose or common goal that helps actors overcome vested interests and form a shared identity. The first line in Satoshi Nakamoto’s seminal white paper that launched Bitcoin describes how "a purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution". Secondly, effective narratives establish cause-effect relationships that form the basis for working towards this goal. Chinese electric vehicle manufacturers and Tesla are rivals in retail markets, but also partners in propagating the idea that electric passenger vehicles are the best means of lowering carbon emissions. Narratives interact with the real world in that actors combine normative beliefs (the "why") and positive beliefs (the "how") into decisions, which result in perceived outcomes that can trigger a change in the narrative itself. As such, narratives are categorically different from stories: stories are self-contained, whereas narratives are open-ended.


Announcing Microsoft Research Open Data – Datasets

The goal is to provide a simple platform to Microsoft researchers and collaborators to share datasets and related research technologies and tools. Microsoft Research Open Data is designed to simplify access to these datasets, facilitate collaboration between researchers using cloud-based resources and enable reproducibility of research. We will continue to shape and grow this repository and add features based on feedback from the community. We recognize that there are dozens of data repositories already in use by researchers and expect that the capabilities of this repository will augment existing efforts. ... Datasets in Microsoft Research Open Data are categorized by their primary research area, as shown in Figure 4. You can find links to research projects or publications with the dataset. You can browse available datasets and download them or copy them directly to an Azure subscription through an automated workflow. To the extent possible, the repository meets the highest standards for data sharing to ensure that datasets are findable, accessible, interoperable and reusable; the entire corpus does not contain personally identifiable information. The site will continue to evolve as we get feedback from users.


Augmented Reality in Manufacturing is Ready for Its Closeup

Virtual and augmented reality (VR and AR) — using technology to see something that literally is not there — are coming to a manufacturing facility near you. In fact, they’re already there: according to PwC, more than one in three manufacturers will implement virtual or augmented reality in manufacturing processes in 2018. Perhaps it will be something relatively simple, like what logistics giant DHL recently accomplished by introducing "Vision Picking," pilot programs in which workers wear smart glasses with visual displays of order-picking instructions, along with information on where items are located and where they need to be placed on a cart. The smart glasses freed pickers' hands of paper instructions and allowed them to work more efficiently and comfortably. ... "Digitalization is not just a vision or program for us at DHL Supply Chain, it's a reality for us and our customers, and is adding value to our operations on the ground," says Markus Voss, Chief Information Officer & Chief Operating Officer, DHL Supply Chain.


The Golden Record: Explained


Where ‘big data’ appears to be the skeleton key that will unlock everything you want to know about your business, there’s more than meets the eye when it comes to understanding your data. Yes, clean data will unlock incredible value for your enterprise; inaccurate records, on the other hand, are a significant burden on productivity. This is why we all seek the “Golden Record”. The Golden Record is the ultimate prize in the data world: a fundamental concept within Master Data Management (MDM), defined as the single source of truth – one data point that captures all the necessary information we need to know about a member, a resource, or an item in our catalogue, assumed to be 100% accurate. Its power is undeniable. ... The complexity of implementing a Master Data Management solution stems from defining the workflow that will connect our disparate data sets. First, we have to identify every data source that feeds into the dataset. Then, we must consider which fields we find to be the most reliable depending on their source. Finally, we must define the criteria that determine when the data from one source should overwrite conflicting data from a secondary source in our MDM system.
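The three workflow steps above (identify sources, rate per-field reliability, define overwrite criteria) can be sketched as a simple survivorship rule: for each field, take the first non-empty value from the most trusted source. The source names, field priorities and sample records below are all hypothetical.

```python
# Hypothetical per-field source priority: e.g. the CRM is most trusted for
# email, the billing system for postal address.
FIELD_PRIORITY = {
    "email":   ["crm", "billing", "web_signup"],
    "address": ["billing", "crm", "web_signup"],
}

def golden_record(records):
    """Merge records, given as {source_name: {field: value}}, into one
    golden record using the priority table as the overwrite criterion."""
    merged = {}
    for field, sources in FIELD_PRIORITY.items():
        for source in sources:                      # highest priority first
            value = records.get(source, {}).get(field)
            if value:                               # first non-empty value wins
                merged[field] = value
                break
    return merged

records = {
    "web_signup": {"email": "old@example.com", "address": "1 Old Rd"},
    "crm":        {"email": "jane@example.com"},
    "billing":    {"address": "2 New St"},
}
print(golden_record(records))
# {'email': 'jane@example.com', 'address': '2 New St'}
```

Real MDM survivorship rules also weigh recency, completeness and data-quality scores, but the priority-table shape of the decision is the same.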



Some IoT experts, taking a practical view, think the only requirement at the end-points should be to deliver secure identity, with no other complexity. Amir Haleem, CEO of Helium, which is building a decentralized network of wide-range wireless protocol gateways and a token to connect edge IoT devices, said adding complexity to end devices "is like a gigantic hurdle to people actually building things." Apart from anything else, there's the cost. "People get very sensitive about the bill of materials (BoM) when you start talking at a scale of millions or tens of millions," said Haleem. "You start proposing like a 60 cent addition to a BoM and all of a sudden that's a meaningful number." Haleem said it makes no sense for end devices, like sensors that track and monitor medicine or food supply chains, to actively participate in a blockchain, because these have to be power-efficient and cheap in an IoT setting. But delivering strong identity in the form of hardware-secured keys is essential, particularly in the face of recurring widespread vulnerabilities, botnets and the like.


Can we have ethical artificial intelligence?

“Generally, the idea that needs to be adopted by the industry is an ethical design right from the very start. So, it’s no longer useful just to have ethical approval of a system once it’s done and deployed – it has to be considered from the beginning and it has to be continuously considered.” It’s clear that the problem with intelligent machines is people. Without careful checks and balances, we could find ourselves using data that is inherently biased to feed machines which would themselves become biased. And without serious consideration and action, we might also find ourselves at the whim of corporations and governments. Francois Chollet, an artificial intelligence researcher at Google, wrote in a recent blog post that AI poses a threat given the possibility of ‘highly effective, highly scalable manipulation of human behaviour.’ He also stated that continued digitization gives social media companies an ever-increasing insight into our minds, and ‘casts human behaviour as an optimization problem, as an AI problem: it becomes possible for [them] to iteratively tune their control vectors in order to achieve specific behaviours.’


The Answer to Disruptive Technology is “Education”

When disruptive technologies are addressed in education, they are usually considered in isolation. I increasingly come across discussions about “artificial intelligence,” “blockchain,” or “robots.” But the world is revolving more and more around these technologies working together. Disruptive technologies are accelerating each other’s development, creating new societal, economic, legal and commercial realities. For instance, disruptive digital technologies (operating together) are transforming the way business works. Instead of hierarchical and asset-heavy companies, we see flatter organizations/platforms with fewer assets and employees. Coordination of the assets and workers isn’t done by traditional managers, but digital technologies, sensors, and data analytics. Some even predict the end of the firm. ... Disruptors create growth by redefining performance that either brings a simple, cheap solution to the low end of a traditional market or enables “non-consumers” to solve pressing problems in their everyday lives. Employing “old world” ideas seems unlikely to work when pursuing the new.


8 Deep Learning Frameworks for Data Science Enthusiasts

The machine learning paradigm is continuously evolving. The key is the shift towards developing machine learning models that run on mobile, in order to make applications smarter and far more intelligent. Deep learning is what makes solving complex problems possible; as one article puts it, deep learning is basically machine learning on steroids. There are multiple layers to process features, and generally, each layer extracts some piece of valuable information. Given that deep learning is the key to executing tasks of a higher level of sophistication, building and deploying such models successfully proves to be quite the Herculean challenge for data scientists and data engineers across the globe. Today, we have a myriad of frameworks at our disposal that allow us to develop tools offering a better level of abstraction, along with simplification of difficult programming challenges. Each framework is built in a different manner for different purposes. Here, we look at eight deep learning frameworks to give you a better idea of which will be the perfect fit, or come in handy, in solving your business challenges.


Process Simulation with the Free DARL Online Service

Trading is about transferring funds from one financial instrument to another and back again, in such a way as to have more of the first financial instrument when you've finished. In this case, the two financial instruments we will use are the pound sterling and the dollar. I'm going to start with a simulated £10,000 and trade in and out of the dollar. ... This data contains a date and the open, close, high and low exchange rates for each day. We're going to simulate trading at the close; in the file, this value is called "price". When you trade anything through an exchange, or use a high street foreign exchange, there are two sources of cost: there's normally a transaction fee, and there's a "spread", which is an offset from the central rate. These are the sources of guaranteed profit to the brokers. Our simulation will have values for both of these. Simulating trading will require us to respond to a trading signal and buy whichever currency we are told, to calculate and subtract charges, and to keep track of the value of our holding. In trading parlance, you can be "long" or "short" a particular currency: if we are holding sterling, we are long sterling and short the dollar; if we are holding dollars, we are short sterling and long the dollar.
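A minimal sketch of that simulation loop, assuming illustrative values for the flat fee, the spread and the trading signals (the article's actual DARL-driven signals and charge values are not reproduced here):

```python
# Toy GBP/USD trading simulation. FEE, SPREAD, the close prices and the
# signals below are invented examples, not the article's real parameters.

FEE = 5.0        # flat transaction fee, charged in the currency we hold
SPREAD = 0.0005  # offset applied against us on either side of the mid rate

def trade(holding, currency, mid_rate, signal):
    """Switch between GBP and USD on a 'buy_usd'/'buy_gbp' signal.

    mid_rate is the GBP/USD close ("price"); the broker fills us at
    mid plus-or-minus SPREAD depending on direction, then we pay the fee.
    """
    if signal == "buy_usd" and currency == "GBP":
        holding = holding * (mid_rate - SPREAD) - FEE   # sell GBP, worse rate
        currency = "USD"
    elif signal == "buy_gbp" and currency == "USD":
        holding = holding / (mid_rate + SPREAD) - FEE   # buy GBP, worse rate
        currency = "GBP"
    return holding, currency

# Start long sterling with the simulated £10,000.
holding, currency = 10_000.0, "GBP"
for close, signal in [(1.3300, "buy_usd"), (1.3100, "buy_gbp")]:
    holding, currency = trade(holding, currency, close, signal)
print(round(holding, 2), currency)
```

Selling sterling at a close of 1.3300 and buying it back at 1.3100 leaves the holding slightly above the starting £10,000 even after the spread and two fees, which is exactly the "more of the first instrument" outcome a trading signal is meant to produce.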



Quote for the day:


"Leaders think and talk about the solutions. Followers think and talk about the problems." -- Brian Tracy


Daily Tech Digest - July 07, 2018

The Re-Permissioning Dilemma Under GDPR

There seem to be divergent opinions on the requirement to re-permission data subject consent under GDPR. Article 4(11) of GDPR makes it clear that if the basis for the collection of personally identifiable information is consent, as required under Article 6, then such consent must be “freely given, specific, informed and an unambiguous indication of the data subject’s wishes…by a clear affirmative action…” Accordingly, obtaining positive and affirmative consent is mandatory; otherwise data controllers and processors may be infringing upon data subject rights and may be subject to legal remedies, liabilities and penalties. However, Recital 171 of the GDPR appears to obviate the need to obtain data subject consent afresh by affirming that “Where processing is based on consent pursuant to Directive 95/46/EC, it is not necessary for the data subject to give his or her consent again if the manner in which the consent has been given is in line with the conditions of this Regulation…” This provision may give comfort to organizations that have made significant investments in building out their contact databases based on implied or opt-out consent as the basis for their collection.


Are We Experiencing a ‘Fintech Moment’?

Many banks – mostly the larger banks – invest large amounts of resources in digital technologies, and have for the last decade. While smaller banks and credit unions lag in these investments, and in attention to what is coming, their technology providers continue to focus on emerging technologies. The crux of the Kodak situation was the speed of the customer behavior shift, and the company's failure to adjust strategy accordingly. Digital technology has impacted, and continues to impact, banking customers' behavior and expectations as well. Luckily, banking customers' behavior has changed much more slowly than in the case of Kodak, allowing financial institutions to move at their own speed thus far. Complacency, in this situation, should worry banking executives. The industry now lags other industries in rethinking customer experiences. Technology innovators and early adopters are moving to fintech firms or the more tech-savvy banks. As the early majority begins this jump, banks will have to be ready to serve them.


What US FinTech can learn from UK FinTech?

Stricter regulations appear to be the biggest hurdle for US digital banks in growing and innovating. Many US digital banks are finding it simpler to partner with existing institutions, rather than forge their own path. Moven and Simple did just that and avoided getting their own charters. Varo Money followed this tactic too, but it has applied for its own bank charter in order to become the first app-based bank in the US. It is much more difficult to get a bank charter (the main banking licence) in the US. Bank charters are more complicated and expensive compared to the UK, where only one regulator has to issue it. In fact, US regulators haven’t approved a new bank charter in 10 years, effectively prohibiting any startups from acquiring one. Furthermore, different deposit insurance requirements benefit UK bank startups too as they are dynamic to the size of the bank. Another huge advantage for UK bank challengers is the implementation of “open banking.” Starting this past January, open banking has forced the UK’s nine largest banks to share their data with licensed startups (with account holder approval). 


Singapore Has Fintech Dreams, But It's Short on Tech Talent

“It would be great if the government can consider experimental schemes,” including a one-year employment pass for technology workers who are vetted by industry associations, “in order to address the immediate talent shortage gap,” he said. “This would be a balanced approach to protect jobs for locals, which should always be a priority, while still allowing Singapore to capture emerging global opportunities in the area of fintech and blockchain,” he said. Ravi Menon, managing director of the Monetary Authority of Singapore, who is spearheading the nation’s fintech efforts, said the talent crunch is a global problem and the biggest challenge for the city state’s burgeoning industry. The central bank is building capacity in the industry through upskilling programs, but that’s not enough. “We have to admit and acknowledge that there are some talents or skill sets we just don’t have and we have to remain open to foreign talent,” Menon said at a fintech event in May.


7 Features That Make Kubernetes Ideal For CI / CD

Kubernetes is a powerful, next generation, open-source platform for automating the deployment, scaling and management of application containers across clusters of hosts. It can run any workload. Kubernetes provides exceptional developer user experience (UX), and the rate of innovation is phenomenal. From the start, Kubernetes’ infrastructure promised to enable organizations to deploy applications rapidly at scale and roll out new features easily while using only the resources needed. With Kubernetes, organizations can have their own Heroku running in their own Google Cloud, AWS or on-premises environment. In years past, think about how often development teams wanted visibility into operations deployments. Developers and operations teams have always been nervous about deployments because maintenance windows had a tendency to expand, causing downtime. Operations teams, in turn, have traditionally guarded their territory so no one would interfere with their ability to get their job done. Then containerization and Kubernetes came along, and software engineers wanted to learn about it and use it. It’s revolutionary.


Memo to the CFO: Get in front of digital finance—or get left back

Digitization is now a realistic goal for the finance function because of a range of technological advances. These include the widespread availability of business data; teams’ ability to process large sets of data using now-accessible algorithms and analytic methods; and improvements in connectivity tools and platforms, such as sensors and cloud computing. CFOs and their teams are the gatekeepers for the critical data required to generate forecasts and support senior leaders’ strategic plans and decisions—among them, data relating to sales, order fulfillment, supply chains, customer demand, and business performance as well as real-time industry and market statistics. ... CFOs may decide to champion and pursue investments in one or all of these areas. Much will depend on the company’s starting point—its current strategies, needs, and capabilities and its existing technologies and skill sets. It is important to note that digital transformation will not happen all at once, and companies should not use their legacy enterprise resource planning and other backbone systems as excuses not to start the change.


Make the Leap from IT 'Pro' to IT 'Manager'

In many technology-focused jobs, employees don’t necessarily need to have a complete grasp of what the business does, how it operates or what direction it is going in. Yet, if you really want to make your mark in IT management, having a thorough understanding of the inner workings of the business is crucial. Therefore, when time permits, make sure you read and fully understand the organization's mission statement. Talk to other managers and get their opinions on the current and future state of the company. Then, armed with that knowledge, begin applying it toward your tech-specific role. Doing so will help you better communicate with end users about why they need specific apps and how things like digital transformation of the business impact their ability to work. ... In traditional enterprise organizations, an IT department is broken into multiple teams. For example, teams commonly are split into service desk, infrastructure, DevOps, database and data security, to name a few. If you want to stand out as a leader in the IT department, a good place to start is within your specific team.


Cyber resilience in Scotland: combating cyber crime

The Scottish government unveiled its cyber resilience strategy in 2015, with the aim of helping Scotland’s people, businesses and public sector improve their ability to use technology securely, and understand and address cyber crime. It launched more detailed cyber resilience plans for the public sector in November 2017, and the private and third sectors in June 2018. ... Part of this investment involves supporting the wider adoption of the Cyber Essentials scheme, with the aim of “at least [doubling] the number of organisations across the public, private and third sectors holding Cyber Essentials or Cyber Essentials Plus certification in Scotland during Financial Year 18-19”. All Scottish public-sector bodies are expected to achieve certification to the Cyber Essentials scheme by the end of October 2018. Cyber Essentials was developed by the UK government to provide five cyber security controls that all organisations can implement to achieve a baseline of cyber security


Ericsson UDN, Mode Move Needle


Managing multiple networks on a global scale is no small task. And while the reliability and security of MPLS is certainly comforting, it is quite rigid. Service providers and enterprises can leverage this private core network to avoid scaling issues as well as the costly and unbending nature of antiquated edge solutions. Marcus Bergström, VP and GM of Ericsson UDN, noted, “We’re proud to have discovered and on-boarded Mode to help add value immediately to our existing customer base. Mode saw immediate value and scale of running its breakthrough routing technology on our modern edge compute platform that is built in partnership with service providers globally.” Specifically, Ericsson UDN (Unified Delivery Network) is now melded with Mode HALO to reinvigorate the private core network. Mode’s routing algorithms play a central role in the cloud service solution. The self-service solution enables migration into the modern age of performance, while introducing flexibility and a reduction in cost.


Management AI: Types Of Machine Learning Systems


“Hopefully” is the key difference between a heuristic and a deterministic algorithm. When deterministic algorithms are run, you will get a solution. Heuristics are “rules of thumb”: rules that can make predictions but do not have certainty. Heuristic algorithms often use probabilistic methods when applying rules and producing results. Two terms relevant to expert systems are forward chaining and backward chaining. Forward chaining starts from evidence and drives to a conclusion. Backward chaining starts with a conclusion and then checks to see if the evidence supports that conclusion. Think about cause and effect. ... Analytics as ML is a sensitive and controversial idea to many. As mentioned in an earlier article, machine learning is moving past a purely AI origin. In the last decade, business intelligence (BI) has increasingly incorporated more advanced analytics. BI analytics include deterministic algorithms that process massive amounts of data to identify patterns as well as make predictive and prescriptive suggestions. Those algorithms are the foundation of analytics used in the BI sector.
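The forward/backward chaining distinction described above can be made concrete with a few lines of Python. This is a minimal sketch over invented if-then rules and facts (the rule base and fact names are illustrative, not from any real expert system):

```python
# Each rule maps a set of premises to one conclusion.
# Invented example rules: rain -> wet ground -> slippery.
RULES = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground"}, "slippery"),
]

def forward_chain(facts):
    """Forward chaining: start from evidence and fire rules
    repeatedly until no new conclusions can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Backward chaining: start from a conclusion (the goal) and
    recursively check whether the evidence supports it."""
    if goal in facts:
        return True
    return any(
        conclusion == goal and all(backward_chain(p, facts) for p in premises)
        for premises, conclusion in RULES
    )

print(forward_chain({"rain"}))            # {'rain', 'wet_ground', 'slippery'}
print(backward_chain("slippery", {"rain"}))  # True
```

Forward chaining is data-driven (derive everything the evidence implies); backward chaining is goal-driven (prove only what is asked), which is why expert systems often prefer it when the rule base is large.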



Quote for the day:


"Leaders don't inflict pain. They bear pain." -- Max DePree