Daily Tech Digest - June 28, 2019

MIT: We're building on Julia programming language to open up AI coding to novices

The system allows coders to create a program that, for example, can infer 3-D body poses and therefore simplify computer vision tasks for use in self-driving cars, gesture-based computing, and augmented reality. It combines graphics rendering, deep learning, and types of probability simulations in a way that improves a probabilistic programming system that MIT developed in 2015 after being granted funds from a 2013 Defense Advanced Research Projects Agency (DARPA) AI program. The idea behind the DARPA program was to lower the barrier to building machine-learning algorithms for things like autonomous systems. "One motivation of this work is to make automated AI more accessible to people with less expertise in computer science or math," says lead author of the paper, Marco Cusumano-Towner, a PhD student in the Department of Electrical Engineering and Computer Science. "We also want to increase productivity, which means making it easier for experts to rapidly iterate and prototype their AI systems."


Distributed Agile teams can help global enterprises reach their deployment and cost-reduction goals. Distributed teams reduce the overhead costs for an organization, and let it build a bigger pool of talented people than if the organization eschewed remote job candidates. In essence, location independence makes an organization much more agile and productive. However, these global teams must address some inherent collaboration challenges, such as differences in time zones, cultures and language barriers. For a distributed Agile team to succeed, each worker must make some extra effort. Project managers should strive to: arm the team with the right tools for communication and collaboration; understand personnel strengths and weaknesses; encourage transparency; hold regular meetings; set clear expectations for stakeholders and team members; adhere to engineering best practices and standards; focus on achievable milestones; and build awareness of different cultures. Let's look at some of the best practices that can help distributed Agile teams address these specific challenges.


Certain Insulin Pumps Recalled Due to Cybersecurity Issues

In a statement, the FDA says it is warning patients and healthcare providers that certain Medtronic MiniMed insulin pumps have potential cybersecurity risks. "Patients with diabetes using these models should switch their insulin pump to models that are better equipped to protect against these potential risks," the FDA says. The potential risks are related to the wireless communication between Medtronic's MiniMed insulin pumps and other devices such as blood glucose meters, continuous glucose monitoring systems, the remote controller and CareLink USB device used with these pumps, the FDA warns. "The FDA is concerned that, due to cybersecurity vulnerabilities identified in the device, someone other than a patient, caregiver or healthcare provider could potentially connect wirelessly to a nearby MiniMed insulin pump and change the pump's settings. This could allow a person to over deliver insulin to a patient, leading to low blood sugar (hypoglycemia), or to stop insulin delivery, leading to high blood sugar and diabetic ketoacidosis (a buildup of acids in the blood)," the agency's statement says.


Determining data value by measuring return on effort

I'm always very encouraged when professionals from different architectural disciplines can converge on common ground. This can be a rare event, so when it does happen, I like to call it out. Such an event happened recently when a contact from the business architecture discipline, Robert DuWors, and I both set out to put some metrics around the measurement of data value in our respective areas of expertise. I believe that business architecture and information architecture are the two core pillars of the architecture of an enterprise. But practitioners of these interconnected disciplines can frequently rub badly against each other, each side devaluing the other's methods and approaches. So, to reach agreement across the two on what constitutes enterprise value of our efforts is a happy place to be. ...and together, we agreed that this represents the value of a specific item of data to the enterprise from both information and business perspectives. Now, of course, this may be refined over time, but it already contains most of the aspects that, together, Robert and I believe are key to this metric... So, what does this equation give us? It's in 3 major sections, which I will call ‘Horizons.’


SoftBank plans drone-delivered IoT and internet by 2023

Why the stratosphere? Well, one reason is that lower altitudes often have strong winds to deal with, including straight after storms. The companies say disaster communications could be a primary use case for the drones, and the stratosphere has a steady current. Also, because of the altitude, LTE and 5G coverage could be much more widespread than any alternative delivery mechanism implemented at a lower altitude. One High Altitude Platform Station (HAPS), as the HAWK30's class of base station is called, could provide service across an area about 125 miles in diameter, and about 40 HAPS could cover the entire Japanese archipelago. By contrast, an older tethered balloon (SoftBank developed one in 2011) might cover just six miles, SoftBank says. Others are aiming for the stratosphere, too: newer systems such as Alphabet's Loon fly tennis court-sized balloons there. SoftBank is a major provider of telecommunications in Japan, a country on the Pacific rim that is prone to earthquakes. It is thus keen to find backup alternatives to wired, or even radio-based, ground assets that can get destroyed in natural disasters.


Five Facts on Fintech


From artificial intelligence to mobile applications, technology helps to increase your access to secure and efficient financial products and services. Since fintech offers the chance to boost economic growth and expand financial inclusion in all countries, the IMF and World Bank surveyed central banks, finance ministries, and other relevant agencies in 189 countries on a range of topics and received 96 responses. A new paper details the results of the survey alongside findings from other regional studies, identifies areas for international cooperation—including roles for the IMF and World Bank—and highlights areas in which further work is needed by governments, international organizations, and standard-setting bodies. ... Awareness of cyber risks is high across countries and most jurisdictions have frameworks in place to protect financial systems. Most jurisdictions—79% of those with higher incomes according to the survey results—identified cyber risks in fintech as a problem for the financial sector.


The Windows 10 security guide: How to safeguard your business


Using the Windows Update for Business features built into Windows 10 Pro, Enterprise, and Education editions, you can defer installation of quality updates by up to 30 days. You can also delay feature updates by as much as two years, depending on the edition. Deferring quality updates by seven to 15 days is a low-cost way to avoid a flawed update that could cause stability or compatibility problems. You can adjust Windows Update for Business settings on individual PCs using the controls in Settings > Update & Security > Advanced Options. In larger organizations, administrators can apply Windows Update for Business settings using Group Policy or mobile device management (MDM) software. You can also administer updates centrally by using a management tool such as System Center Configuration Manager or Windows Server Update Services. Finally, your software update strategy shouldn't stop at Windows itself. Make sure that updates for Windows applications, including Microsoft Office and Adobe applications, are installed automatically.


Use event processing to achieve microservices state management


Unfortunately, it's often unclear whether a process close to the event source actually maintains the state itself, or whether that state is somehow provided from outside the service. For this reason, it's essential to create unique transaction IDs for all state-dependent operations. You can use that ID to retrieve specific state information from a database or to drive transaction-specific orchestration. This ID also helps carry data from one phase of a transaction, or flow, to another. This setup is essentially a back-end approach to microservices state management. Front-end state management is another option. Control from the front end means that a user or edge process maintains the state data. This state information passes along through an event flow, and each successive process accumulates more information from the previous ones. Since this state information queues along with the events, you won't lose state data during a failure as long as the event stays intact. Also, when processes get scaled with more instances of the microservices involved, the event flow can provide the state data those processes need.
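As a rough illustration of that front-end pattern, here is a minimal Python sketch (all names are hypothetical): a transaction ID is minted once, and each processing stage appends to the state that rides along with the event itself.

```python
import uuid

def new_event(payload):
    # The unique transaction ID travels inside the event, so any stage
    # (or an external orchestrator) can correlate state-dependent work.
    return {"tx_id": str(uuid.uuid4()), "payload": payload, "state": {}}

def validate(event):
    # Each stage reads the accumulated state and appends its own results;
    # no service instance has to hold state between events.
    event["state"]["validated"] = True
    return event

def price(event):
    event["state"]["total"] = event["payload"]["qty"] * 9.99
    return event

event = new_event({"qty": 3})
for stage in (validate, price):
    event = stage(event)

# The state survives as long as the event does -- after a failure,
# replaying the queued event reconstructs it.
print(event["tx_id"], event["state"])
```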


How to build disruptive strategic flywheels


Capabilities-driven strategy suggests that companies that have a clear way to play (WTP) that aligns with market demands, and that invest in a system of four to six differentiating capabilities that enable the company to excel at the WTP, are better positioned for success. But increasing clock speed changes the calculation. Today, the half-life of a competitive advantage may be fleeting. As industries are disrupted, players that have been successful within the context of one business cycle might need to rethink their differentiating capabilities, their investment portfolios, and possibly even their WTP more frequently and dynamically. Ford no longer just makes cars; it focuses instead on mobility solutions. Big oil companies are investing in renewable energy as a hedge against constraints on emissions. Amazon is competing with…everyone. As a result, it behooves organizations and managers to continually assess competitive moves, regulatory and technology evolution, and consumer preferences — and to adapt decisions in a dynamic fashion.


Smart Lock Turns Out to be Not So Smart, or Secure


Researchers are warning that a keyless smart door lock made by U-tec, called Ultraloq, could allow attackers to track down where the device is being used and easily pick the lock – either virtually or physically. Ultraloq is a Bluetooth fingerprint and touchscreen door lock sold for about $200. It allows a user to present either a fingerprint or a PIN for local access to a building. Ultraloq also has an app that can be used locally or remotely for access. When Pen Test Partners, with help from researchers identified as @evstykas and @cybergibbons, took a closer look, they found Ultraloq was riddled with vulnerabilities. For starters, researchers found that the application programming interface (API) used by the mobile app leaked enough personal data from the user account to determine the physical address where the Ultraloq device was being used. ... “The API has no authentication at all,” researchers wrote. “The data is obfuscated by being base64-encoded twice, but decoding it exposes that the server side has no authentication or authorization logic. This leads to an attacker being able to get data and impersonate all the users,” researchers wrote.
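The "double base64" obfuscation described offers no real protection, since base64 is an encoding, not encryption. A short Python sketch with an invented payload shows how trivially it reverses:

```python
import base64

# A hypothetical API response obfuscated the way the researchers describe:
# base64-encoded twice, with no server-side authorization behind it.
secret = b'{"user": "alice", "addr": "10 Main St"}'
obfuscated = base64.b64encode(base64.b64encode(secret))

# Anyone who captures the traffic can undo it with two decode calls.
print(base64.b64decode(base64.b64decode(obfuscated)))
```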



Quote for the day:

"Humility is a great quality of leadership which derives respect and not just fear or hatred." -- Yousef Munayyer

Daily Tech Digest - June 27, 2019

Tracking down library injections on Linux

The linux-vdso.so.1 file (which may have a different name on some systems) is one that the kernel automatically maps into the address space of every process. Its job is to find and locate other shared libraries that the process requires. One way that this library-loading mechanism is exploited is through the use of an environment variable called LD_PRELOAD. As Jaime Blasco explains in his research, "LD_PRELOAD is the easiest and most popular way to load a shared library in a process at startup. This environmental variable can be configured with a path to the shared library to be loaded before any other shared object." ... Note that the LD_PRELOAD environment variable is at times used legitimately. Various security monitoring tools, for example, could use it, as might developers while they are troubleshooting, debugging or doing performance analysis. However, its use is still quite uncommon and should be viewed with some suspicion. It's also worth noting that osquery can be used interactively or be run as a daemon (osqueryd) for scheduled queries. See the reference at the bottom of this post for more on this.
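In osquery, a query along the lines of `SELECT pid, key, value FROM process_envs WHERE key = 'LD_PRELOAD';` surfaces processes launched with the variable set. The Python sketch below does the raw-filesystem equivalent on Linux by scanning /proc — an illustration, not a substitute for proper monitoring tooling:

```python
import os

# Flag any running process whose environment sets LD_PRELOAD.
# Requires permission to read each process's environ file (Linux only).
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/environ", "rb") as f:
            env = f.read().split(b"\0")
    except OSError:
        continue  # process exited, or access was denied
    for entry in env:
        if entry.startswith(b"LD_PRELOAD="):
            print(pid, entry.decode())
```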



Responsible Data Management: Balancing Utility With Risks

To mitigate risks relating to data sharing, good protocols for information exchange need to be in place. Currently these exist bilaterally between certain organisations, but they should extend to apply multilaterally, to an entire sector or to an entire response, to maximise impact. Another way to improve inter-agency data sharing is to use contemporary cryptographic solutions, which allow data usage without giving up data governance. In other words, one organisation can run analyses on another organisation’s data and get aggregate outputs, without ever accessing the data directly. There are a number of other data-management practices that can reduce the risks of the data falling into the wrong hands, such as ensuring that all computers in the field are password protected, and have firewalls and up-to-date antivirus software, operating systems and browsers. Additionally, the data files themselves should be encrypted. There are open-source programs that solve all of these tasks, so addressing them may be a matter of competence inside organisations rather than funding.
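As one example of the file-encryption step, the open-source Python cryptography package can encrypt a data file in a few lines. The filename is hypothetical, and the sketch glosses over the hard part — storing the key somewhere other than next to the data:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this separate from the data
f = Fernet(key)

with open("survey_data.csv", "rb") as fh:
    token = f.encrypt(fh.read())   # authenticated symmetric encryption

with open("survey_data.csv.enc", "wb") as fh:
    fh.write(token)

# Anyone holding the key can recover the plaintext later:
assert f.decrypt(token) == open("survey_data.csv", "rb").read()
```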


Insurer: Breach Undetected for Nine Years

But despite the common challenges in detecting data breaches, the nine-year lag time at Dominion National is unusually high, some experts note. "Dominion National's notification of a breach nine years after the unauthorized access may be an unenviable record for detection," says Hewitt of CynergisTek. "This is unusual because it strongly suggests that they may not have been performing comprehensive security audits or performing system activity reviews." Tom Walsh, president of the consultancy tw-Security, notes: "I am surprised that they detected it dating that far back. Most organizations do not retain audit logs or event logs for that long. "Most disturbing is that an intruder or a malicious program or code could be into the systems and not previously detected. Nine years is beyond the normal refresh lifecycle for most servers. I would have thought that it could have been detected during an upgrade or a refresh of the hardware." Walsh adds that it is still unclear whether the incident is reportable under the HIPAA Breach Notification Rule. "They were careful in stating that there is no evidence to indicate that data was even accessed," he notes.


Going Beyond GDPR to Protect Customer Data

GDPR was something of a superstar in 2018. Searches on the regulation hit Beyoncé and Kardashian territory periodically throughout the year. In the United States, individual states began either exploring their own version of the GDPR or, in the case of California, enacting their own regulations. Other states that either enacted or strengthened existing data governance laws similar to the GDPR include Alabama, Arizona, Colorado, Iowa, Louisiana, Nebraska, Oregon, South Carolina, South Dakota, Vermont and Virginia. At this point, a growing number of companies operating outside the EU are ceasing operations within the EEA rather than taking on expensive changes to their business applications and practices and exposing themselves to possible fines. GDPR prosecutions continue, as do the filing of complaints and investigations. Each member country has its own listing of court cases in progress, so it’s a bit difficult to quantify just how many investigations and cases are active.


Juniper’s Mist adds WiFi 6, AI-based cloud services to enterprise edge

“Mist's AI-driven Wi-Fi provides guest access, network management, policy applications and a virtual network assistant as well as analytics, IoT segmentation, and behavioral analysis at scale,” Gartner stated. “Mist offers a new and unique approach to high-accuracy location services through a cloud-based machine-learning engine that uses Wi-Fi and Bluetooth Low Energy (BLE)-based signals from its multielement directional-antenna access points. The same platform can be used for Real Time Location System (RTLS) usage scenarios, static or zonal applications, and engagement use cases like wayfinding and proximity notifications.” Juniper bought Mist in March for $405 million for this AI-based Wi-Fi technology. For Juniper the Mist buy was significant, as it had depended on agreements with partners such as Aerohive and Aruba to deliver wireless, according to Gartner. Mist, too, has partners and recently announced joint product development with VMware that integrates Mist WLAN technology and VMware’s VeloCloud-based NSX SD-WAN. “Mist has focused on large enterprises and has won some very well known brands,” said Chris Depuy.


DevSecOps Keys to Success

The most important elements of a successful DevSecOps implementation are automation and collaboration. 1) With DevSecOps, the goal is to embed security early on into every phase of the development/deployment lifecycle. By designing a strategy with automation in mind, security is no longer an afterthought; instead, it becomes part of the process from the beginning. This ensures security is ingrained at the speed and agility of DevOps without slowing business outcomes. 2) Similar to DevOps, where there is close alignment between developers and technology operations engineers, collaboration is crucial in DevSecOps. Rather than considering security to be “someone else’s job,” developers, technology operations and security teams all work together on a common goal. By collaborating around shared goals, DevSecOps teams make informed decisions in a workflow where there is the most context around how changes will impact production and where corrective action carries the least business impact.


Microsoft beefs up OneDrive security

The new feature - dubbed OneDrive Personal Vault - was trumpeted as a special protected partition of OneDrive where users could lock their "most sensitive and important files." They would access that area only after a second step of identity verification, ranging from a fingerprint or face scan to a self-made PIN, a one-time code texted to the user's smartphone or the use of the Microsoft Authenticator mobile app.  The idea behind OneDrive Personal Vault, said Seth Patton, general manager for Microsoft 365, is to create a failsafe so that "in the event that someone gains access to your account or your device," the files within the vault would remain sacrosanct. Access to the vault will also be on a timer, Patton said, that locks the partition after a user-set period of inactivity. Files opened from the vault will also close when the timer expires. As the feature's name implied, the vault is only for OneDrive Personal, the consumer-grade storage service, not for the OneDrive for Business available to commercial customers. Although OneDrive Personal is a free service - albeit with a puny 5GB of storage - many come to it from the Office 365 subscription service.


Disaster planning: How to expect the unexpected


Larger organisations will typically have the advantages of a greater resource reserve and multiple premises in different regions, but they can be slow to react and their communications can struggle. Smaller organisations can be much more adaptable and swifter to react, but rarely have resources in reserve and are usually based in one fixed location. As with all things, preparation is key, and therefore it is worth taking time to prepare business continuity strategies and disaster plans. Rather than being scenario-specific – having dedicated plans for different eventualities – organisations should take an agnostic approach with their business continuity strategies. “If your business recovery plan is written strictly for recovery, regardless of scenario, then you will be in the best shape, as you will know that whatever happens, the plan has been tested,” says Dan Johnson, director of global business continuity and disaster recovery at Ensono. “You will go to your backup procedures that keep your daily processes moving and make sure business is flowing.”


An IoT maturity model and tips for IoT deployment success


Nemertes compiled these results into an IoT maturity model that companies can use as their roadmap to success (see figure). The maturity model comprises four levels: unprepared -- the organization lacks tools and processes to address the IoT initiative; reactive -- the organization has platforms and processes to respond to but not proactively address the issue; proactive -- the organization has the tools and processes to proactively deliver on the issue as it is currently understood; and anticipatory -- the organization has the tools, processes and people to handle future requirements. The one-third of organizations in the survey with successful IoT deployments were likely to be at Level 2 or Level 3 IoT maturity, and in a couple of areas -- namely executive sponsorship, budgeting and architecture -- successful organizations far outshone organizations that were less successful. ... In addition, key IoT team members include the IoT architect, IoT security architect, IoT network architect and IoT integration architect. Most large organizations have different architects that encompass these responsibilities, though their job titles may not reflect it.


AI presents host of ethical challenges for healthcare

With respect to ethics, she observed that the massive volumes of health data being leveraged by AI must be carefully protected to preserve privacy. “The sheer volume, variability and sensitive nature of the personal data being collected require newer, extensive, secure and sustainable computational infrastructure and algorithms,” according to Tourassi’s testimony. She also told lawmakers that data ownership and use when it comes to AI continues to be a sensitive issue that must be addressed. “The line between research use and commercial use is blurry,” said Tourassi. To maintain a strong ethical AI framework, Tourassi believes fundamental questions need to be answered, such as: Who owns the intellectual property of data-driven AI algorithms in healthcare? The patient or the medical center collecting the data by providing the health services? Or the AI developer? “We need a federally coordinated conversation involving not only the STEM sciences but also social sciences, economics, law, public policy and patient advocacy stakeholders” to “address the emerging domain-specific complexities of AI use,” she added.



Quote for the day:


"Nobody in your organization will be able to sustain a level of motivation higher than you have as their leader." -- Danny Cox


Daily Tech Digest - June 26, 2019

MongoDB CEO tells hard truths about commercial open source

As ugly as that sentiment may seem, it's (mostly) true. Not completely, because MongoDB has had some external contributions. For example, Justin Dearing responded to Ittycheria's claim thus: "As someone that has made a (very tiny) contribution to the [MongoDB] server source code, this is kind of insulting to hear [it] said this way." There's also the inconvenient truth that part of MongoDB's popularity has been the broad array of drivers available. While the company writes the primary drivers used with MongoDB, the company relies on third-party developers to pick up the slack on lesser-used drivers. Those drivers, though less used, still contribute to the overall value of MongoDB. But it's largely true, all the same. And it's probably even more true of all the other open source companies that have been lining up to complain about public clouds like AWS "stealing" their code. None of these companies is looking for code contributions. Not really. When AWS, for example, has tried to commit code, they've been rebuffed.



Sen. Wyden Asks NIST to Develop Secure File Sharing Standards

Wyden also recommends implementing new technology and better training for government workers to help ensure that sensitive documents can be sent securely with better encryption. "Many people incorrectly believe that password-protected .zip files can protect sensitive data," Wyden writes in the letter. "Indeed, many password-protected .zip files can be easily broken with off-the-shelf hacking tools. This is because many of the software programs that create .zip files use a weak encryption algorithm by default. While secure methods to protect and share data exist and are freely available, many people do not know which software they should use." Wyden notes that the increasing number of data breaches, as well as nation-state attacks, point to the need to develop new standards, protocols and guidelines to ensure that sensitive files are encrypted and can be securely shared. He also asked NIST to develop easy-to-use instructions so that the public can take advantage of newer technologies. A spokesperson for NIST tells Information Security Media Group that the agency is reviewing Wyden's letter and will provide a response to the senator's concerns and questions.
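For illustration, the third-party pyzipper package can write AES-encrypted zip archives from Python, sidestepping the weak legacy ZipCrypto scheme Wyden describes (the standard-library zipfile module cannot write encrypted archives at all; the filenames and password here are placeholders):

```python
import pyzipper  # third-party: pip install pyzipper

# Write an AES-256 encrypted archive instead of legacy ZipCrypto.
with pyzipper.AESZipFile("report.zip", "w",
                         compression=pyzipper.ZIP_LZMA,
                         encryption=pyzipper.WZ_AES) as zf:
    zf.setpassword(b"correct horse battery staple")
    zf.writestr("sensitive.txt", b"not for off-the-shelf cracking tools")
```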


Q&A on the Book Empathy at Work


Emotional empathy is inherent in us; when we see someone laughing, we smile. When we see someone crying, we feel sad. Cognitive empathy is understanding what a person is thinking or feeling; this one is often referred to as “perspective taking” because we are actively engaged in attempting to “get” where the person is coming from. Empathic concern is being so moved by what another person is going through that we are empowered to act. The majority of the time when people are talking about empathy, they are referring to empathic concern. These definitions of empathy are all accurate and informative. But a big point that I always try to make is that empathy is a verb; it’s a muscle that must be worked consistently for any real change to occur. That’s why everyone’s definition of what empathy is in their own lives is going to be a little bit different. We all feel understood in a different way, so each person truly has to define what empathy looks and feels like for themselves. For me, it’s allowing me to finish my thoughts. I’m a stutterer, and it sometimes takes me a bit to get a word or a thought out.


A Developer's Journey into AI and Machine Learning

There are challenges. Microsoft really has to sell developers and data engineers that data science, AI and ML is not some big, scary, hyper-nerd technology. There are corners where that is true, but this field is open to anyone who's willing to learn. And Microsoft is certainly doing its part with streamlined services and tools like Cognitive Services and ML.NET. End of the day, anyone who is already a developer/engineer clearly has the core capabilities to be successful in this field. All people need to do is level up some of their skills and add new ones. In some cases, people will need to unlearn what they've already learned, particularly around the field of certainty. The way I like to put it, a DBA (database admin) will always give a precise answer. Inaccurate maybe, but never imprecise. "There are 864,782 records in table X," for example. But a data science/ML/AI practitioner deals with probabilities. "There's a 86.568% chance there's a cat in this picture." It's a change of mindset as much as a change in technologies.
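That mindset shift can be sketched in a few lines of Python with scikit-learn; everything below is synthetic and illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# The DBA's answer is exact: SELECT COUNT(*) FROM x  ->  864782.
# The ML practitioner's answer is a calibrated probability instead.
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = (X[:, 0] > 0.5).astype(int)   # toy labels standing in for "cat or not"

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:1]))   # e.g. [[0.13 0.87]] -- a probability, not a count
```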


Break free from traditional network security


While it can be argued that perimeterless network security will become essential to keep the wheels of commerce turning, Simon Persin, director at Turnkey Consulting, says: “A lack of network perimeters needs to be matched with technology that can prevent damage.” In a perimeterless network architecture, the design and behaviour of the network infrastructure should aim to prevent IT assets being exposed to threats such as rogue code. Persin says that by understanding which protocols are allowed to run on the network, an SDN can allow people to perform the legitimate tasks required by their role. Within a network architecture, a software-defined network separates the forwarding and control planes. Paddy Francis, chief technology officer for Airbus CyberSecurity, says this means routers essentially become basic switches, forwarding network traffic in accordance with rules defined by a central controller. What this means from a monitoring perspective, says Francis, is that packet-by-packet statistics can be sent back to the central controller from each forwarding element.


The Unreasonable Effectiveness of Software Analytics

Software analytics distills large amounts of low-value data into small chunks of very-high-value data. Such chunks are often predictive; that is, they can offer a somewhat accurate prediction about some quality attribute of future projects—for example, the location of potential defects or the development cost. In theory, software analytics shouldn’t work because software project behavior shouldn’t be predictable. Consider the wide, ever-changing range of tasks being implemented by software and the diverse, continually evolving tools used for software’s construction (for example, IDEs and version control tools). Let’s make that worse. Now consider the constantly changing platforms on which the software executes (desktops, laptops, mobile devices, RESTful services, and so on) or the system developers’ varying skills and experience. Given all that complex and continual variability, every software project could be unique. And, if that were true, any lesson learned from past projects would have limited applicability for future projects.


Robots can now decode the cryptic language of central bankers


But robots aren’t that smart yet, according to Dirk Schumacher, a Frankfurt-based economist at French lender Natixis SA, which this month started publishing an automated sentiment index of European Central Bank meeting statements. “The question is how intelligent it can become,” he said. “Maybe in a few years time we’ll have algorithms which get everything right, but at this stage I find it a nice crosscheck to verify one’s own assessments.” The main edge humans still have over machines is being able to read and understand ambiguity, Schumacher said. While Natixis’ system can quantify how optimistic or pessimistic ECB policy makers are looking at word choice and intensity, it can’t discern if a policy maker said something ironic — although arguably not all humans could either. “It’s not a perfect science and it’s hard to see that humans will be replaced by these methods anytime soon,” said Elisabetta Basilico, an investment adviser who writes about quantitative finance. Prattle, which was recently acquired by Liquidnet, claims its software accurately predicts G10 interest rate moves 9.7 times out of 10.
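At their simplest, such systems score word choice against hand-built lexicons. The deliberately tiny Python sketch below (with invented word lists) shows the idea — and also why a lexicon cannot read irony:

```python
# Toy hawkish/dovish lexicons; real systems use far larger,
# weighted vocabularies and sentence-level context.
HAWKISH = {"robust", "tightening", "inflationary", "vigilance"}
DOVISH = {"accommodative", "downside", "subdued", "patience"}

def sentiment(statement: str) -> float:
    words = statement.lower().split()
    score = sum(w in HAWKISH for w in words) - sum(w in DOVISH for w in words)
    return score / max(len(words), 1)   # normalise by statement length

print(sentiment("The Governing Council expects a robust recovery "
                "but risks remain tilted to the downside"))  # nets out to 0.0
```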


Error-Resilient Server Ecosystems for Edge and Cloud Datacenters

Realizing our proposed error-resilient, energy-efficient ecosystem faces many challenges, in part because it requires the design of new technologies and the adoption of a system operation philosophy that departs from the current pessimistic one. The UniServer Consortium (www.uniserver2020.eu)—consisting of academic institutions and leading companies such as AppliedMicro Circuits, ARM, and IBM—is working toward such a vision. Its goal is the development of a universal system architecture and software ecosystem for servers used for cloud- and edge-based datacenters. The European Community’s Horizon 2020 research program is funding UniServer (grant no. 688540). The consortium is already implementing our proposed ecosystem in a state-of-the-art X-Gene2 eight-core, ARMv8-based microserver with 28-nm feature sizes. The initial characterization of the server’s processing cores shows that there is a significant safety margin in the supply voltage used to operate each core. Results show that some cores could run at 10 percent below the nominal supply voltage that the manufacturer advises. This could lead to a 38 percent power savings.


What is edge computing, and how can you get started?


Edge computing architecture is a modernized version of data center and cloud architectures with the enhanced efficiency of having applications and data closer to sources, according to Andrew Froehlich, president of West Gate Networks. Edge computing also seeks to eliminate bandwidth and throughput issues caused by the distance between users and applications. Edge computing is not the same as the network edge, which is more similar to a town line. A network edge is one or more boundaries within a network to divide the enterprise-owned and third-party-operated parts of the network, Froehlich said. This distinction enables IT teams to designate control of network equipment. Edge computing's ability to bring compute and data storage in or near enterprise branches is attractive to those who require quick response times and support for large data amounts. Edge computing can bring several benefits to enterprise networks with centralized management, lights-out operations and cloud-style infrastructure, according to John Burke.


Cloudflare Criticizes Verizon Over Internet Outage


Cloudflare put the blame squarely on Verizon for not adequately filtering erroneous routes announced by an ISP, DQE Communications, in Pennsylvania. It pulled no punches, saying there was no good reason for Verizon's failure other than "sloppiness or laziness." "The leak should have stopped at Verizon," writes Tom Strickx, a Cloudflare network software engineer, in the blog post. "However, against numerous best practices outlined below, Verizon's lack of filtering turned this into a major incident that affected many Internet services such as Amazon, Linode and Cloudflare." DQE used a BGP optimizer, which allows for more specific BGP routes, Strickx writes. Those more specific routes trump more general ones in announcements. DQE announced the routes to one of its customers, Allegheny Technologies, a metals manufacturing company. Then, those routes went to Verizon. To be fair, the ultimate responsibility falls on DQE for announcing the wrong routes. Allegheny is somewhat to blame for pushing those routes on. But then Verizon - one of the largest transit providers in the world - propagated the routes.
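The mechanics behind "more specific routes trump more general ones" is longest-prefix matching, which Python's standard library can illustrate (the /21 below is illustrative, not one of the actual leaked prefixes):

```python
from ipaddress import ip_address, ip_network

# BGP prefers the most specific matching announcement, which is why a
# leaked optimizer route can pull traffic away from the legitimate one.
routes = {
    ip_network("104.16.0.0/12"): "legitimate aggregate announcement",
    ip_network("104.16.80.0/21"): "leaked more-specific route",
}

dest = ip_address("104.16.81.1")
best = max((n for n in routes if dest in n), key=lambda n: n.prefixlen)
print(routes[best])  # the /21 wins over the /12
```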



Quote for the day:


"Defeat is not the worst of failures. Not to have tried is the true failure." -- George Woodberry


Daily Tech Digest - June 25, 2019

AI in IoT elevates data analysis to the next level


In a typical enterprise network, IoT exists beyond the boundaries of the cloud, passing data back through a firewall, where it then takes residence in storage and is made available to some process or application. But with so many different devices reporting -- a number that will steadily increase -- traffic problems are inevitable. Managing the flow of so much data from so many endpoints is beyond the resources of most companies. But wait; it gets worse. Many IoT applications are two-way streets, where data gathered by sensors has consequences in the locations they're reporting on; for example, adjusting the power consumption of a building based on changes in occupancy and weather. In many such cases, there's no time for data to make a round trip to a cloud. ... The fix for these problems is edge computing -- extending the processing power of an enterprise network by adding gateways and IoT devices that offer local processing power.


.NET Core: Past, Present, and Future


The highlight of the .NET Core 3.0 announcement was the support for Windows desktop applications, focused on Windows Forms, Windows Presentation Foundation (WPF), and UWP XAML. At the moment of the announcement, the .NET Standard was shown as a common basis for Windows Desktop Apps and .NET Core. Also, .NET Core was pictured as part of a composition containing ASP.NET Core, Entity Framework Core, and ML.NET. Support for developing and porting Windows desktop applications to .NET Core would be provided by "Windows Desktop Packs", additional components for compatible Windows platforms. ... Microsoft shows .NET 5 as a unifying platform for desktop, Web, cloud, mobile, gaming, IoT, and AI applications. It also shows explicit integration with all Visual Studio editions and with the command line interface (CLI). The goal of the new .NET version is to produce a single .NET runtime and framework, cross-platform, integrating the best features of .NET Core, .NET Framework, Xamarin, and Mono.


Google’s Hangouts Chat gets chatbot boost with Dialogflow

By bringing Dialogflow to Hangouts Chat, Google wants to simplify the process of creating natural language bots users can interact with. “With Dialogflow, you can create a natural-sounding conversational UI with just a few clicks,” said Jon Harmer, product manager, Google Cloud, in a blog post. “Because Dialogflow includes built-in Natural Language Understanding (NLU), your bot can quickly understand and respond to user messages.” Developers can make their Dialogflow bots available for use in Google’s team collaboration app via the Hangouts Chat Integrations page, where they can install a bot on their own account to test in the application. In addition, a new Hubot adapter has been introduced, allowing developers to bring Hubot bots into Hangouts Chat. A chatbot catalog is also on its way to improve discoverability as the number of bots grows. That catalog will be available in the “coming months,” Google said. “Google continues to aggressively move to enable intelligent chatbots and natural voice capabilities to add value and remove mundane steps in communications and collaboration,” said Wayne Kurtzman.


U.S. adds Chinese technology companies to export blacklist

Among those added to the blacklist were AMD’s Chinese joint-venture partner Higon, Commerce said in the statement. Also included were Sugon, which Commerce identified as Higon’s majority owner, along with Chengdu Haiguang Integrated Circuit and Chengdu Haiguang Microelectronics Technology, both of which the department said Higon had an ownership interest in. The ban affects AMD’s Chinese joint venture THATIC, which was established in 2016. AMD uses THATIC to license its microprocessor technology to Chinese companies including Higon. THATIC, or Tianjin Haiguang Advanced Technology Investment Co., is a Chinese holding company comprising an AMD joint venture with two entities, according to an AMD regulatory filing. THATIC provides chips to Sugon, a Chinese server and computer maker. Lisa Su, AMD’s chief executive officer, said at a recent conference in Taiwan that AMD would not license its newer technologies to Chinese companies.


There are multiple approaches to zero trust, but the main ones focus on identity, the gateway and the device. However, as the tide of mobile and cloud continues to intensify, the limitations of gateway- and identity-centric approaches become more apparent. For instance: identity-centric approaches provide limited visibility into devices, apps and threats, while also still relying on passwords, one of the main causes of data breaches; and gateway-centric approaches offer similarly limited visibility into devices, apps and threats, while also assuming that all enterprise traffic goes through the enterprise network, when in reality 25 per cent of enterprise traffic doesn’t. Only a mobile-centric zero trust approach addresses the security challenges of the perimeter-less modern enterprise while allowing the agility and anytime access that business needs. Mobile-centric zero trust seeks to verify more attributes than both these approaches before granting access. It validates the device, establishes user context, checks app authorisation, verifies the network, and detects and remediates threats before granting secure access to any device or user.


7 steps to enhance IoT security

Controlling access within an IoT environment is one of the bigger security challenges companies face when connecting assets, products and devices. That includes controlling network access for the connected objects themselves. Organizations should first identify the behaviors and activities that are deemed acceptable by connected things within the IoT environment, and then put in place controls that account for this but at the same time don’t hinder processes, says John Pironti, president of consulting firm IP Architects and an expert on IoT security. “Instead of using a separate VLAN [virtual LAN] or network segment which can be restrictive and debilitating for IoT devices, implement context-aware access controls throughout your network to allow appropriate actions and behaviors, not just at the connection level but also at the command and data transfer levels,” Pironti says. This will ensure that devices can operate as planned while also limiting their ability to conduct malicious or unauthorized activities, Pironti says. “This process can also establish a baseline of expected behavior that can then be logged and monitored to identify anomalies or activities that fall outside of expected behaviors at acceptable thresholds,” he says.
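A minimal sketch of such command-level, baseline-driven control in Python — device types, commands and thresholds are all invented:

```python
from collections import defaultdict

# Expected-behavior baseline: which commands a device type may issue,
# and how often, before the activity counts as anomalous.
BASELINE = {
    "thermostat": {"commands": {"read_temp", "set_temp"}, "max_per_min": 6},
}
counts = defaultdict(int)

def allowed(device_type: str, command: str) -> bool:
    profile = BASELINE.get(device_type)
    if profile is None or command not in profile["commands"]:
        return False                 # unknown device or unexpected action
    counts[device_type] += 1
    return counts[device_type] <= profile["max_per_min"]  # rate threshold

print(allowed("thermostat", "set_temp"))    # True: within the baseline
print(allowed("thermostat", "exec_shell"))  # False: outside expected behavior
```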


Introduction to ELENA Programming Language

ELENA is a general-purpose, object-oriented, polymorphic language with late binding. It features message dispatching/manipulation, dynamic object mutation, a script engine / interpreter and group object support. ... There is an important distinction between "methods" and "messages". A method is a body of code while a message is something that is sent. A method is similar to a function; in this analogy, sending a message is similar to calling a function. An expression which invokes a method is called a "message sending expression". ELENA terminology makes a clear distinction between "message" and "method". A message-sending expression will send a message to the object. How the object responds to the message depends on the class of the object. Objects of different classes will respond to the same message differently, since they will invoke different methods. Generic methods may accept any message with the specified signature (parameter types).
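The message/method distinction can be illustrated outside ELENA. The sketch below is Python, not ELENA code: the same "message" sent to receivers of different classes invokes different methods.

```python
class Duck:
    def speak(self):      # the method Duck binds to the "speak" message
        return "quack"

class Robot:
    def speak(self):      # a different method behind the same message
        return "beep"

# One message-sending expression; which method runs depends on the
# class of the receiver.
for receiver in (Duck(), Robot()):
    print(receiver.speak())
```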


Cyber attackers using wider range of threats


“The key findings illustrate the importance of layered security protections,” said Corey Nachreiner, chief technology officer at WatchGuard Technologies. “Whether it be DNS-level filtering to block connections to malicious websites and phishing attempts, intrusion prevention services to ward off web application attacks, or multifactor authentication to prevent attacks using compromised credentials – it’s clear that modern cyber criminals are using a bevy of diverse attack methods. “The best way for organisations to protect themselves is with a unified security platform that offers a comprehensive range of security services,” he said. Another key finding of the report is that Mac OS malware on the rise. Mac malware first appeared on WatchGuard’s top 10 malware list in the third quarter of 2018, and now two variants have become prevalent enough to make the list in the first quarter of 2019, the report said. It added that this increase in Mac-based malware further debunks the myth that Macs are immune to viruses and malware and reinforces the importance of threat protection for all devices and systems.


Google Cloud Scheduler is Now Generally Available


Users can schedule a job in Cloud Scheduler by using its UI, CLI or API to invoke an HTTP/S endpoint, Cloud Pub/Sub topic or App Engine application. When a job starts, it sends a Cloud Pub/Sub message or an HTTP request to a specified target destination on a recurring schedule. The target handler then executes the job and returns a response of the outcome: either a success code (2xx for HTTP/App Engine and 0 for Pub/Sub) when it succeeds, or an error, in which case Cloud Scheduler retries the job until it reaches the maximum number of attempts. Furthermore, once a user schedules a job, they can monitor it in the Cloud Scheduler UI and check its status. Google Cloud Scheduler is not the only managed cron service available in the public cloud. Competitors Microsoft and Amazon have offered scheduler services for quite some time. Microsoft offers the Azure Scheduler service, which became generally available in late 2015 and will be replaced by Azure Logic Apps Service, where developers can use the scheduler connector. Also, Logic Apps offers additional capabilities for application and process integration, data integration and B2B communication. Furthermore, AWS released the Batch service with similar capabilities to Scheduler in late 2016.
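On the receiving end, an HTTP target just needs to map job outcomes to status codes. A minimal Flask sketch, with a hypothetical endpoint and helper:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/run-job", methods=["POST"])
def run_job():
    # Cloud Scheduler treats any 2xx as success; any other status
    # triggers its retry policy up to the configured attempt limit.
    try:
        do_nightly_cleanup(request.get_json(silent=True))
        return "ok", 200
    except Exception:
        return "job failed, please retry", 500

def do_nightly_cleanup(payload):
    print("running scheduled work with", payload)
```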


4 steps to developing responsible AI

As AI capabilities race ahead, government leaders, business leaders, academics and many others are more interested than ever in the ethics of AI as a practical matter, underlining the importance of having a strong ethical framework surrounding its use. But few really have the answer to developing ethical and responsible AI. Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence. It is imperative for business leaders to understand AI and make a top-down commitment to the responsible use of AI. Central to this is taking a human-centric approach to AI thinking and development. It is not enough to have the correct data, or an algorithm that performs accurately. It is critical to incorporate systems of governance, design and training that provide a framework for successfully implementing AI in an organization. A strong Responsible AI framework entails mitigating the risks of AI with imperatives that address four key areas.



Quote for the day:


"Leaders keep their eyes on the horizon, not just on the bottom line." -- Warren G. Bennis


Daily Tech Digest - June 24, 2019

Software Defined Perimeter (SDP): The deployment

SDP architectures are user-centric; meaning they validate the user and the device before permitting any access. Access policies are created based on user attributes. This is in comparison to the traditional networking systems, which are solely based on the IP addresses that do not consider the details of the user and their devices. Assessing contextual information is a key aspect of SDP. Anything tied to IP is ridiculous as we don’t have a valid hook to hang things on for security policy enforcement. We need to assess more than the IP address. For device and user assessment, we need to look deeper and go beyond IP not only as an anchor for network location but also for trust. Ideally, a viable SDP solution must analyze the user, role, device, location, time, network and application, along with the endpoint security state. Also, by leveraging the elements, such as directory group membership and IAM-assigned attributes and user roles, an organization can define and control access to the network resources. This can be performed in a way that’s meaningful to the business, security, and compliance teams.
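A contextual access decision of this kind boils down to evaluating several attributes together rather than an IP address alone. A minimal Python sketch, with invented attribute names and an invented rule:

```python
from datetime import time

def grant_access(ctx: dict) -> bool:
    # User role, device posture, MFA and time-of-day all weigh in;
    # the IP address alone never grants anything.
    return (ctx["user_role"] in {"engineer", "admin"}
            and ctx["device_compliant"]          # endpoint security state
            and ctx["mfa_passed"]
            and time(6, 0) <= ctx["local_time"] <= time(22, 0))

print(grant_access({"user_role": "engineer", "device_compliant": True,
                    "mfa_passed": True, "local_time": time(9, 30)}))  # True
```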



Troy Hunt: Why Data Breaches Persist

"Anecdotally, it just feels like we're seeing a massive increase recently," he says. "I do wonder how much of it is due to legislation in various parts of the world around mandatory disclosure as well. Maybe we're just seeing more stuff come to the surface that otherwise may not have been exposed." But the potential for even bigger breaches also continues to rise, he says. "I don't see any good reason why data breaches should be reducing, certainly not in numbers," Hunt says. "I reckon there are a bunch of factors ... that are amplifying certainly the rate of breaches and also the scale of them." Such factors, he says, include the ever-increasing amounts of data being generated by organizations and individuals, the increasing use of the cloud - and the ease of losing control of data in the cloud - as well as the many more internet of things devices being brought into the world. In a video interview at the recent Infosecurity Europe conference, Hunt discusses: Long-term forecasts about data breach quantity and severity; Why breach perpetrators so often continue to be children; and How so much "smart" technology aimed at children continues to be beset by abysmal security.


Explore 4 key areas of enterprise network transformation


The top issue among the IT professionals surveyed was a lack of time to complete business initiative projects -- 43% of respondents said they struggle with this. In addition, 42% of respondents said they struggle to troubleshoot across the network as a whole. These blind spots can impede NetOps, network performance quality and, therefore, network transformation. Overall, a poorly performing network negatively affected business performance as a whole, respondents said. As such, respondents said they would prioritize the following areas of network performance: application performance, remote site performance, and endpoint and wireless performance. These improvements were among the most common goals for networking and IT professionals, according to the study. To support these network transformation goals, 37% of teams said they hope to upgrade their network performance management service. Teams can address several network performance issues with improved end-to-end visibility of their network and more insight into specific network issues. 


The Importance of Metrics to Agile Teams

Many programmes fail simply because teams could not agree or gain buy-in on meaningful sets of metrics or objectives. By its very nature, Agile encourages a myriad of different methodologies and workflows which vary by team and company. However, this does not mean that it’s impossible to achieve consensus on metrics for SI. We believe the trick is to keep metrics simple and deterministic. Complex metrics will not be commonly understood and can be hard to measure consistently, which can lead to distrust. And deterministic metrics are key, as improving them will actually deliver a better outcome. As an example – you may measure Lead Times as an overall proxy of Time to Value, but Lead Time is a measure of the outcome. It’s also important to measure the things that drive/determine Lead Times, levers that teams can actively manage in order to drive improvements in the overarching metric (e.g. determinant metrics like Flow Efficiency). The deterministic metrics we advocate are designed to underpin team SI, in order to steadily improve Agile engineering effectiveness.
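As a worked example of a determinant metric, flow efficiency is commonly defined as active work time divided by total lead time; the figures below are illustrative:

```python
# Flow Efficiency: the share of lead time spent actively working
# rather than waiting in a queue.
work_days = 4          # time the item was actively worked on
lead_time_days = 20    # request to delivery
flow_efficiency = work_days / lead_time_days * 100
print(f"{flow_efficiency:.0f}%")   # 20% -- most of the lead time is queueing
```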


Microsoft’s road to multicloud support


An important part of Microsoft’s multicloud strategy is Azure Stack, which is preconfigured hardware to run Azure services that can be deployed locally. However, Kubernetes support on-premise via Azure Stack is behind the support for Kubernetes on the public Azure cloud. “We have Kubernetes on Azure Stack through a project called AKS Engine, which is in preview now,” says Monroy. He claims that AKS Engine will be generally available “soon”, adding: “We have a lot of customers who are using this today.” Serverless containers offer developers a way to achieve multicloud portability. In the Microsoft world, Azure AKS virtual nodes can be deployed to run workloads in Azure Container instances. “There is no lock-in, nothing Azure-specific – you just annotate your workloads and say ‘I want to opt in to this scaling capability’ and we’re able to provide per-second billing,” says Monroy. “If you take that same workload and you run it on a different cloud, it’s going to run.” But AKS virtual nodes are not yet available for Azure in the UK – although they are available elsewhere in Europe.


Data Governance and Data Architecture: There is No Silver Bullet


Having tools and technology facilitates the process of understanding the data, where it’s stored, how it’s organized, what the processes are, and how it’s all tied together, “but it’s not the ‘easy’ button that does everything for you.” Some companies have been trying to rely on metadata repositories alone, but the real key, he said, is in modeling. “A picture’s worth a thousand words, right?” Having the metadata and being able to do analytics and queries is helpful, but without pictures that explain how all the elements are related, and understanding the data lineage and life cycle, “You don’t have a chance.” Keeping higher-level business goals in mind is essential, but implementation should be focused on the fundamentals. “Metadata is a big piece of that too. A lot of the metadata is focused up at that higher level. Are your metadata management tools really getting down to the lower level?” Data and process modeling in particular are more important now than they’ve ever been before, he said, but that modeling should be coupled with the reverse engineering capabilities and all the tools and processes needed to do proper governance.


Disposable Technology: A Concept Whose Time Has Come


Modern digital companies like Google, Facebook, Twitter, Apple, Netflix, Amazon, and AirBnB have taken a technology architecture approach that increasingly treats the technology infrastructure as “disposable” using open source technologies. And the reason for this open approach, in my humble opinion, is two-fold: Firstly, building upon open source technologies provides the flexibility, agility and mobility for companies to move to the next best technology without the constraints  ... Modern digital companies are basing their technology infrastructure on open source technologies that not only prevents vendor architectural lock-in but also allows them to advance the technology capabilities at their pace and at the pace of the business; and Secondly and more importantly, these digital companies understand that the technology isn’t the source of business value and differentiation. They understand that the source of business value and differentiation is: the data that these organizations are masterfully amassing via every customer engagement and every usage of the product or service; and the customer, product and operational insights that leads to new Intellectual Property (IP) monetization and commercialization opportunities.


Blue Prism acquires UK’s Thoughtonomy to expand its RPA platform with more AI

Robotic process automation — which lets organizations shift repetitive back-office tasks to machines to complete — has been a hot area of growth in the world of enterprise IT, and now one of the companies that’s making waves in the area has acquired a smaller startup to continue extending its capabilities. Blue Prism, which helped coin the term RPA when it was founded back in 2001, has announced that it is buying Thoughtonomy, which has built a cloud-based AI engine that delivers RPA-based solutions on a SaaS framework. Blue Prism is publicly traded on the London Stock Exchange — where its market cap is around £1.3 billion ($1.6 billion), and in a statement to the market alongside its half-year earnings, it said it would be paying up to £80 million ($100 million) for the firm. The deal is coming in a combination of cash and stock: £12.5 million payable on completion of the deal, £23 million in shares payable on completion of the deal, up to £20 million payable a year after the deal closes, up to £4.5 million in cash after 18 months, and a final £20 million on the second anniversary of the deal closing, in shares.


Codes Tell the Story: A Fruitful Supply Chain Flourishes


Sharing anecdotes from the process, McMillan gave the audience several practical tips. She noted that Usage and Procedure Logging (UPL) provided invaluable insights for the migration. “This tells you not only which objects you’re touching, but also which business processes they’re calling: Warehouse or inventory management? We used this to figure out what’s really being used,” she said with respect to custom coding. She said the results were very promising: “What we found out, in production, is that almost 60% of the custom code developed in the last 5-10 years, was not being used! I can’t tell you how many of those custom scripts were used just once, and never touched again.” This was fantastic news, because custom code can cause serious headaches when undergoing a migration of this magnitude. Every last bit of custom code needs to be vetted, which can be very time-consuming, and error-prone. “Some of the most tedious parts were really challenging,” McMillan said. “Having to go through object by object took a lot of time; certain tables that SAP made obsolete; fields where the type has changed. When you do the migration, you can’t code the same way you used to code.”


Obscuring Complexity

How can MDSD obscure the complexity of your application code? It is tricky but it can be done. The generator outputs the code that implements the API resources, so the developers don't have to worry about coding that. However, if you use the generator as a one-time code wizard and commit the output to your version-controlled source code repository (e.g. git), then all you did was save some initial coding time. You didn't really hide anything, since the developers will have to study and maintain the generated code. To truly obscure the complexity of this code, you have to commit the model into your version-controlled source code repository, but not the generated source code. You need to generate that output source from the model every time you build the code. You will need to add that generator step to all your build pipelines. Maven users will want to configure the swagger-codegen-maven-plugin in their pom file. That plugin is a module in the swagger-codegen project. What if you do have to make changes to the generated source code? That is why you will have to assume ownership of the templates and also commit them to your version-controlled source code repository.
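For Maven users, the configuration follows the pattern below — a sketch only: the plugin version shown is an assumption, and the spec path, target language and output directory must match your own project.

```xml
<plugin>
  <groupId>io.swagger</groupId>
  <artifactId>swagger-codegen-maven-plugin</artifactId>
  <version>2.4.8</version> <!-- pick the release you have validated -->
  <executions>
    <execution>
      <goals><goal>generate</goal></goals>
      <configuration>
        <!-- the committed model; the generated sources stay out of git -->
        <inputSpec>${project.basedir}/src/main/resources/api.yaml</inputSpec>
        <language>spring</language>
        <output>${project.build.directory}/generated-sources</output>
      </configuration>
    </execution>
  </executions>
</plugin>
```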



Quote for the day:


"Do not compromise yourself. You are all you have got." -- Janis Joplin