Daily Tech Digest - June 26, 2019

MongoDB CEO tells hard truths about commercial open source

As ugly as that sentiment may seem, it's (mostly) true. Not completely, because MongoDB has had some external contributions. For example, Justin Dearing responded to Ittycheria's claim thus: "As someone that has made a (very tiny) contribution to the [MongoDB] server source code, this is kind of insulting to hear [it] said this way." There's also the inconvenient truth that part of MongoDB's popularity has been the broad array of drivers available. While the company writes the primary drivers used with MongoDB, it relies on third-party developers to pick up the slack on lesser-used drivers. Those drivers, though less used, still contribute to the overall value of MongoDB. But it's largely true, all the same. And it's probably even more true of all the other open source companies that have been lining up to complain about public clouds like AWS "stealing" their code. None of these companies is looking for code contributions. Not really. When AWS, for example, has tried to commit code, it has been rebuffed.



Sen. Wyden Asks NIST to Develop Secure File Sharing Standards
Wyden also recommends implementing new technology and better training for government workers to help ensure that sensitive documents can be sent securely with better encryption. "Many people incorrectly believe that password-protected .zip files can protect sensitive data," Wyden writes in the letter. "Indeed, many password-protected .zip files can be easily broken with off-the-shelf hacking tools. This is because many of the software programs that create .zip files use a weak encryption algorithm by default. While secure methods to protect and share data exist and are freely available, many people do not know which software they should use." Wyden notes that the increasing number of data breaches, as well as nation-state attacks, point to the need to develop new standards, protocols and guidelines to ensure that sensitive files are encrypted and can be securely shared. He also asked NIST to develop easy-to-use instructions so that the public can take advantage of newer technologies. A spokesperson for NIST tells Information Security Media Group that the agency is reviewing Wyden's letter and will provide a response to the senator's concerns and questions.


Q&A on the Book Empathy at Work


Emotional empathy is inherent in us; when we see someone laughing, we smile. When we see someone crying, we feel sad. Cognitive empathy is understanding what a person is thinking or feeling; this one is often referred to as “perspective taking” because we are actively engaged in attempting to “get” where the person is coming from. Empathic concern is being so moved by what another person is going through that we are empowered to act. The majority of the time when people are talking about empathy, they are referring to empathic concern. These definitions of empathy are all accurate and informative. But a big point that I always try to make is that empathy is a verb; it’s a muscle that must be worked consistently for any real change to occur. That’s why everyone’s definition of what empathy is in their own lives is going to be a little bit different. We all feel understood in a different way, so each person truly has to define what empathy looks and feels like for themselves. For me, it’s allowing me to finish my thoughts. I’m a stutterer, and it sometimes takes me a bit to get a word or a thought out.


A Developer's Journey into AI and Machine Learning

There are challenges. Microsoft really has to sell developers and data engineers on the idea that data science, AI and ML are not some big, scary, hyper-nerd technology. There are corners where that is true, but this field is open to anyone who's willing to learn. And Microsoft is certainly doing its part with streamlined services and tools like Cognitive Services and ML.NET. At the end of the day, anyone who is already a developer/engineer clearly has the core capabilities to be successful in this field. All people need to do is level up some of their skills and add new ones. In some cases, people will need to unlearn what they've already learned, particularly around the notion of certainty. The way I like to put it, a DBA (database admin) will always give a precise answer. Inaccurate maybe, but never imprecise. "There are 864,782 records in table X," for example. But a data science/ML/AI practitioner deals with probabilities. "There's an 86.568% chance there's a cat in this picture." It's a change of mindset as much as a change in technologies.
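
To make that mindset shift concrete, here is a minimal Python sketch (the table, data and model are invented for illustration): the database query returns an exact count, while a trained classifier can only ever return a probability.

```python
import sqlite3
from sklearn.linear_model import LogisticRegression

# The DBA's world: a precise (if possibly stale) answer.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x (id INTEGER)")
conn.executemany("INSERT INTO x VALUES (?)", [(i,) for i in range(5)])
exact_count = conn.execute("SELECT COUNT(*) FROM x").fetchone()[0]
print(f"There are {exact_count} records in table x")

# The ML practitioner's world: a probability, never a certainty.
# Toy features (stand-ins for image embeddings) and labels: 1 = cat, 0 = not cat.
features = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8], [0.9, 0.1]]
labels = [1, 0, 1, 0]
model = LogisticRegression().fit(features, labels)
p_cat = model.predict_proba([[0.15, 0.85]])[0][1]
print(f"There's a {p_cat:.1%} chance there's a cat in this picture")
```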


Break free from traditional network security


While it can be argued that perimeterless network security will become essential to keep the wheels of commerce turning, Simon Persin, director at Turnkey Consulting, says: “A lack of network perimeters needs to be matched with technology that can prevent damage.” In a perimeterless network architecture, the design and behaviour of the network infrastructure should aim to prevent IT assets being exposed to threats such as rogue code. Persin says that by understanding which protocols are allowed to run on the network, an SDN can allow people to perform the legitimate tasks required by their role. Within a network architecture, a software-defined network separates the forwarding and control planes. Paddy Francis, chief technology officer for Airbus CyberSecurity, says this means routers essentially become basic switches, forwarding network traffic in accordance with rules defined by a central controller. What this means from a monitoring perspective, says Francis, is that packet-by-packet statistics can be sent back to the central controller from each forwarding element.
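
As a rough illustration of the idea Francis describes (not any particular SDN product's API), here is a toy Python sketch of a central controller that owns the forwarding rules while simple switches apply them and report per-packet statistics back to it.

```python
from collections import Counter

class Controller:
    """Central control plane: owns the rules, receives statistics."""
    def __init__(self):
        self.rules = {("10.0.0.0/24", 443): "forward", ("10.0.0.0/24", 23): "drop"}
        self.stats = Counter()

    def report(self, switch_id, match, action):
        self.stats[(switch_id, match, action)] += 1

class Switch:
    """Forwarding element: a basic switch that only applies controller rules."""
    def __init__(self, switch_id, controller):
        self.switch_id, self.controller = switch_id, controller

    def handle_packet(self, dst_subnet, dst_port):
        match = (dst_subnet, dst_port)
        action = self.controller.rules.get(match, "drop")
        self.controller.report(self.switch_id, match, action)
        return action

ctrl = Controller()
edge = Switch("edge-1", ctrl)
print(edge.handle_packet("10.0.0.0/24", 443))  # forward
print(edge.handle_packet("10.0.0.0/24", 23))   # drop: telnet is not an allowed protocol
print(dict(ctrl.stats))                        # per-packet stats collected centrally
```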


The Unreasonable Effectiveness of Software Analytics

Software analytics distills large amounts of low-value data into small chunks of very-high-value data. Such chunks are often predictive; that is, they can offer a somewhat accurate prediction about some quality attribute of future projects—for example, the location of potential defects or the development cost. In theory, software analytics shouldn’t work because software project behavior shouldn’t be predictable. Consider the wide, ever-changing range of tasks being implemented by software and the diverse, continually evolving tools used for software’s construction (for example, IDEs and version control tools). Let’s make that worse. Now consider the constantly changing platforms on which the software executes (desktops, laptops, mobile devices, RESTful services, and so on) or the system developers’ varying skills and experience. Given all that complex and continual variability, every software project could be unique. And, if that were true, any lesson learned from past projects would have limited applicability for future projects.
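
As a hedged illustration of the kind of predictor the article has in mind (the module metrics and labels below are invented), a few lines of scikit-learn turn historical project data into a defect-risk estimate for new modules:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical per-module metrics from past releases:
# [lines of code, recent changes, number of authors], label = had a defect?
history = [
    [1200, 30, 5], [150, 2, 1], [900, 25, 4], [200, 1, 1],
    [2500, 60, 8], [300, 5, 2], [1800, 40, 6], [100, 1, 1],
]
had_defect = [1, 0, 1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(history, had_defect)

# Estimate defect risk for a module in the upcoming release.
new_module = [[1600, 35, 5]]
print(f"Estimated defect probability: {model.predict_proba(new_module)[0][1]:.0%}")
```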


Robots can now decode the cryptic language of central bankers


But robots aren’t that smart yet, according to Dirk Schumacher, a Frankfurt-based economist at French lender Natixis SA, which this month started publishing an automated sentiment index of European Central Bank meeting statements. “The question is how intelligent it can become,” he said. “Maybe in a few years’ time we’ll have algorithms which get everything right, but at this stage I find it a nice crosscheck to verify one’s own assessments.” The main edge humans still have over machines is being able to read and understand ambiguity, Schumacher said. While Natixis’ system can quantify how optimistic or pessimistic ECB policy makers are by looking at word choice and intensity, it can’t discern if a policy maker said something ironic — although arguably not all humans could either. “It’s not a perfect science and it’s hard to see that humans will be replaced by these methods anytime soon,” said Elisabetta Basilico, an investment adviser who writes about quantitative finance. Prattle, which was recently acquired by Liquidnet, claims its software accurately predicts G10 interest rate moves 9.7 times out of 10.
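
Natixis has not published its method, but a minimal lexicon-based scorer in Python gives a feel for how word choice and intensity can be turned into a sentiment index (the lexicon here is invented and far smaller than anything used in practice):

```python
# Invented, tiny sentiment lexicon; real systems use far larger, calibrated ones.
LEXICON = {
    "robust": 2.0, "strengthened": 1.5, "solid": 1.0,
    "moderated": -1.0, "weakened": -1.5, "deteriorated": -2.0,
}

def sentiment_index(statement: str) -> float:
    """Average the lexicon scores of the words that appear in the statement."""
    words = statement.lower().replace(",", " ").replace(".", " ").split()
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

text = "Incoming data have moderated, but underlying demand remains solid."
print(sentiment_index(text))  # 0.0: one negative and one positive term cancel out
```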


Error-Resilient Server Ecosystems for Edge and Cloud Datacenters

Realizing our proposed error-resilient, energy-efficient ecosystem faces many challenges, in part because it requires the design of new technologies and the adoption of a system operation philosophy that departs from the current pessimistic one. The UniServer Consortium (www.uniserver2020.eu)—consisting of academic institutions and leading companies such as AppliedMicro Circuits, ARM, and IBM—is working toward such a vision. Its goal is the development of a universal system architecture and software ecosystem for servers used for cloud- and edge-based datacenters. The European Community’s Horizon 2020 research program is funding UniServer (grant no. 688540). The consortium is already implementing our proposed ecosystem in a state-of-the-art X-Gene 2 eight-core, ARMv8-based microserver with 28-nm feature sizes. The initial characterization of the server’s processing cores shows that there is a significant safety margin in the supply voltage used to operate each core. Results show that some cores could operate at a supply voltage 10 percent below the nominal value that the manufacturer advises. This could lead to a 38 percent power savings.
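
The excerpt does not spell out the power model behind these figures, but the standard first-order relation for dynamic CMOS power, sketched below in Python with purely illustrative values, shows why supply voltage margins matter so much: voltage enters quadratically.

```python
# First-order dynamic power of a CMOS core: P_dyn = alpha * C * V^2 * f
# (alpha: activity factor, C: switched capacitance, V: supply voltage, f: frequency).
def dynamic_power(alpha, capacitance, voltage, frequency):
    return alpha * capacitance * voltage ** 2 * frequency

# Illustrative values only; they are not the X-Gene 2's real parameters.
nominal = dynamic_power(0.2, 1e-9, 1.0, 2.4e9)
undervolted = dynamic_power(0.2, 1e-9, 0.9, 2.4e9)   # 10 percent below nominal Vdd

print(f"Dynamic power at -10% Vdd: {undervolted / nominal:.0%} of nominal")  # ~81%
```

At constant frequency, a 10 percent undervolt on its own trims dynamic power by roughly 19 percent; larger system-level savings such as the figure quoted above generally also reflect reduced leakage and other relaxed operating margins.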


What is edge computing, and how can you get started?


Edge computing architecture is a modernized version of data center and cloud architectures, with the enhanced efficiency of having applications and data closer to their sources, according to Andrew Froehlich, president of West Gate Networks. Edge computing also seeks to eliminate bandwidth and throughput issues caused by the distance between users and applications. Edge computing is not the same as the network edge, which is more similar to a town line. A network edge is one or more boundaries within a network that divide the enterprise-owned and third-party-operated parts of the network, Froehlich said. This distinction enables IT teams to designate control of network equipment. Edge computing's ability to bring compute and data storage into or near enterprise branches is attractive to those who require quick response times and support for large amounts of data. Edge computing can bring several benefits to enterprise networks with centralized management, lights-out operations and cloud-style infrastructure, according to John Burke.


Cloudflare Criticizes Verizon Over Internet Outage


Cloudflare put the blame squarely on Verizon for not adequately filtering erroneous routes announced by an ISP, DQE Communications, in Pennsylvania. It pulled no punches, saying there was no good reason for Verizon's failure other than "sloppiness or laziness." "The leak should have stopped at Verizon," writes Tom Strickx, a Cloudflare network software engineer, in the blog post. "However, against numerous best practices outlined below, Verizon's lack of filtering turned this into a major incident that affected many Internet services such as Amazon, Linode and Cloudflare." DQE used a BGP optimizer, which allows for more specific BGP routes, Strickx writes. Those more specific routes trump more general ones in announcements. DQE announced the routes to one of its customers, Allegheny Technologies, a metals manufacturing company. Then, those routes went to Verizon. To be fair, the ultimate responsibility falls on DQE for announcing the wrong routes. Allegheny is somewhat to blame for pushing those routes on. But then Verizon - one of the largest transit providers in the world - propagated the routes.
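
A short Python sketch of why the leaked routes win: routers pick the longest-prefix match, so a more specific announcement always beats the legitimate aggregate (the prefixes below are illustrative).

```python
import ipaddress

# Legitimate aggregate route vs. a leaked, more specific route.
routes = {
    ipaddress.ip_network("104.16.0.0/12"): "legitimate origin",
    ipaddress.ip_network("104.16.80.0/21"): "leaked via BGP optimizer",
}

def best_route(destination: str):
    """Longest-prefix match: the most specific covering network wins."""
    dst = ipaddress.ip_address(destination)
    matches = [net for net in routes if dst in net]
    return max(matches, key=lambda net: net.prefixlen)

winner = best_route("104.16.81.10")
print(winner, "->", routes[winner])  # 104.16.80.0/21 -> leaked via BGP optimizer
```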



Quote for the day:


"Defeat is not the worst of failures. Not to have tried is the true failure." -- George Woodberry


Daily Tech Digest - June 25, 2019

AI in IoT elevates data analysis to the next level


In a typical enterprise network, IoT exists beyond the boundaries of the cloud, passing data back through a firewall, where it then takes residence in storage and is made available to some process or application. But with so many different devices reporting -- a number that will steadily increase -- traffic problems are inevitable. Managing the flow of so much data from so many endpoints is beyond the resources of most companies. But wait; it gets worse. Many IoT applications are two-way streets, where data gathered by sensors has consequences in the locations they're reporting on; for example, adjusting the power consumption of a building based on changes in occupancy and weather. In many such cases, there's no time for data to make a round trip to a cloud. ... The fix for these problems is edge computing -- extending the processing power of an enterprise network by adding gateways and IoT devices that offer local processing power.


.NET Core: Past, Present, and Future


The highlight of the .NET Core 3.0 announcement was the support for Windows desktop applications, focused on Windows Forms, Windows Presentation Foundation (WPF), and UWP XAML. At the time of the announcement, the .NET Standard was shown as a common basis for Windows Desktop Apps and .NET Core. Also, .NET Core was pictured as part of a composition containing ASP.NET Core, Entity Framework Core, and ML.NET. Support for developing and porting Windows desktop applications to .NET Core would be provided by "Windows Desktop Packs", additional components for compatible Windows platforms. ... Microsoft shows .NET 5 as a unifying platform for desktop, Web, cloud, mobile, gaming, IoT, and AI applications. It also shows explicit integration with all Visual Studio editions and with the command line interface (CLI). The goal of the new .NET version is to produce a single .NET runtime and framework, cross-platform, integrating the best features of .NET Core, .NET Framework, Xamarin, and Mono.


Google’s Hangouts Chat gets chatbot boost with Dialogflow

By bringing Dialogflow to Hangouts Chat, Google wants to simplify the process of creating natural language bots users can interact with.  “With Dialogflow, you can create a natural-sounding conversational UI with just a few clicks,” said Jon Harmer, product manager, Google Cloud, in a blog post. “Because Dialogflow includes built-in Natural Language Understanding (NLU), your bot can quickly understand and respond to user messages.” Developers can make their Dialogflow bots available for use in Google’s team collaboration app via the Hangouts Chat Integrations page, where they can install a bot on their own account to test in the application.  In addition, a new Hubot adapter has been introduced, allowing developers to bring Hubot bots into Hangouts Chat. A chatbot catalog is also on its way to improve discoverability as the number of bots grows. That catalog will be available in the “coming months,” Google said.  “Google continues to aggressively move to enable intelligent chatbots and natural voice capabilities to add value and remove mundane steps in communications and collaboration,” said Wayne Kurtzman.


U.S. adds Chinese technology companies to export blacklist

Among those added to the blacklist was AMD’s Chinese joint-venture partner Higon, Commerce said in the statement. Also included were Sugon, which Commerce identified as Higon’s majority owner, along with Chengdu Haiguang Integrated Circuit and Chengdu Haiguang Microelectronics Technology, both of which the department said Higon had an ownership interest in. The ban affects AMD’s Chinese joint venture THATIC, which was established in 2016. AMD uses THATIC to license its microprocessor technology to Chinese companies including Higon. THATIC, or Tianjin Haiguang Advanced Technology Investment Co., is a Chinese holding company comprising an AMD joint venture with two entities, according to an AMD regulatory filing. THATIC provides chips to Sugon, a Chinese server and computer maker. Lisa Su, AMD’s chief executive officer, said at a recent conference in Taiwan that AMD would not license its newer technologies to Chinese companies.


There are multiple approaches to zero trust, but the main ones are focused on identity, gateway and the device. However, as the tide of mobile and cloud continues to intensify, the limitations of gateway and identity-centric approaches become more apparent. For instance, identity-centric approaches provide limited visibility into devices, apps and threats, while still relying on passwords, one of the main causes of data breaches; and gateway-centric approaches likewise provide limited visibility into devices, apps and threats, while assuming that all enterprise traffic goes through the enterprise network when, in reality, 25 per cent of it does not. Only a mobile-centric zero trust approach addresses the security challenges of the perimeter-less modern enterprise while allowing the agility and anytime access that business needs. Mobile-centric zero trust seeks to verify more attributes than both these approaches before granting access. It validates the device, establishes user context, checks app authorisation, verifies the network, and detects and remediates threats before granting secure access to any device or user.


7 steps to enhance IoT security

Controlling access within an IoT environment is one of the bigger security challenges companies face when connecting assets, products and devices. That includes controlling network access for the connected objects themselves. Organizations should first identify the behaviors and activities that are deemed acceptable by connected things within the IoT environment, and then put in place controls that account for this but at the same time don’t hinder processes, says John Pironti, president of consulting firm IP Architects and an expert on IoT security. “Instead of using a separate VLAN [virtual LAN] or network segment which can be restrictive and debilitating for IoT devices, implement context-aware access controls throughout your network to allow appropriate actions and behaviors, not just at the connection level but also at the command and data transfer levels,” Pironti says. This will ensure that devices can operate as planned while also limiting their ability to conduct malicious or unauthorized activities, Pironti says. “This process can also establish a baseline of expected behavior that can then be logged and monitored to identify anomalies or activities that fall outside of expected behaviors at acceptable thresholds,” he says.
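
A minimal sketch of the context-aware approach Pironti describes, using invented device profiles: allow only the commands and destinations expected for a device class, and log everything else as an anomaly.

```python
# Hypothetical baseline of expected behaviour per device class.
BASELINE = {
    "hvac-sensor": {"commands": {"report_temp", "report_humidity"},
                    "destinations": {"telemetry.example.internal"}},
    "badge-reader": {"commands": {"verify_badge"},
                     "destinations": {"access.example.internal"}},
}

def check_action(device_class: str, command: str, destination: str) -> str:
    """Context-aware decision at the command and data-transfer level."""
    profile = BASELINE.get(device_class)
    if not profile:
        return "deny: unknown device class"
    if command in profile["commands"] and destination in profile["destinations"]:
        return "allow"
    return "deny and log: outside expected behaviour"

print(check_action("hvac-sensor", "report_temp", "telemetry.example.internal"))
print(check_action("hvac-sensor", "open_shell", "203.0.113.9"))  # flagged anomaly
```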


Introduction to ELENA Programming Language

ELENA is a general-purpose, object-oriented, polymorphic language with late binding. It features message dispatching/manipulation, dynamic object mutation, a script engine / interpreter and group object support. ... There is an important distinction between "methods" and "messages". A method is a body of code while a message is something that is sent. A method is similar to a function; in this analogy, sending a message is similar to calling a function. An expression which invokes a method is called a "message sending expression". ELENA terminology makes a clear distinction between "message" and "method". A message-sending expression will send a message to the object. How the object responds to the message depends on the class of the object. Objects of different classes will respond to the same message differently, since they will invoke different methods. Generic methods may accept any message with the specified signature (parameter types).
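
ELENA syntax aside, the message/method distinction is easy to demonstrate in Python, where sending the same "message" to objects of different classes invokes different methods; this is an analogy, not ELENA code.

```python
class Duck:
    def speak(self):          # the method this class binds to the "speak" message
        return "quack"

class Robot:
    def speak(self):
        return "beep boop"

def send_speak(obj):
    # The message-sending expression: which method runs is decided at
    # run time by the receiver's class (late binding).
    return obj.speak()

for receiver in (Duck(), Robot()):
    print(send_speak(receiver))   # same message, different methods
```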


Cyber attackers using wider range of threats


“The key findings illustrate the importance of layered security protections,” said Corey Nachreiner, chief technology officer at WatchGuard Technologies. “Whether it be DNS-level filtering to block connections to malicious websites and phishing attempts, intrusion prevention services to ward off web application attacks, or multifactor authentication to prevent attacks using compromised credentials – it’s clear that modern cyber criminals are using a bevy of diverse attack methods. “The best way for organisations to protect themselves is with a unified security platform that offers a comprehensive range of security services,” he said. Another key finding of the report is that Mac OS malware is on the rise. Mac malware first appeared on WatchGuard’s top 10 malware list in the third quarter of 2018, and now two variants have become prevalent enough to make the list in the first quarter of 2019, the report said. It added that this increase in Mac-based malware further debunks the myth that Macs are immune to viruses and malware and reinforces the importance of threat protection for all devices and systems.


Google Cloud Scheduler is Now Generally Available


Users can schedule a job in Cloud Scheduler through its UI, CLI or API to invoke an HTTP/S endpoint, Cloud Pub/Sub topic or App Engine application. When a job starts, it sends a Cloud Pub/Sub message or an HTTP request to a specified target destination on a recurring schedule. The target handler then executes the job and returns a response indicating the outcome: either a success code (2xx for HTTP/App Engine and 0 for Pub/Sub) when it succeeds, or an error, in which case Cloud Scheduler retries the job until it reaches the maximum number of attempts. Once a user schedules a job, they can monitor it in the Cloud Scheduler UI and check its status. Google Cloud Scheduler is not the only managed cron service available in the public cloud; competitors Microsoft and Amazon have offered comparable services for quite some time. Microsoft offers the Azure Scheduler service, which became generally available in late 2015 and will be replaced by the Azure Logic Apps service, where developers can use the scheduler connector. Logic Apps also offers additional capabilities for application and process integration, data integration and B2B communication. AWS, meanwhile, released the Batch service with similar capabilities to Scheduler in late 2016.
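
As a rough sketch of the contract described above (the endpoint and handler below are hypothetical, not Google's API), an HTTP target only needs to answer with a 2xx status for the run to count as a success; anything else invites a retry.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class JobHandler(BaseHTTPRequestHandler):
    """Hypothetical target for a scheduled HTTP job."""
    def do_POST(self):
        try:
            run_nightly_cleanup()          # the actual work (stubbed out below)
            self.send_response(200)        # 2xx: the scheduler marks the run OK
        except Exception:
            self.send_response(500)        # non-2xx: the scheduler will retry
        self.end_headers()

def run_nightly_cleanup():
    print("cleaning up...")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), JobHandler).serve_forever()
```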


4 steps to developing responsible AI

As AI capabilities race ahead, government leaders, business leaders, academics and many others are more interested than ever in the ethics of AI as a practical matter, underlining the importance of having a strong ethical framework surrounding its use. But few really have the answer to developing ethical and responsible AI. Responsible AI is the practice of designing, building and deploying AI in a manner that empowers people and businesses, and fairly impacts customers and society – allowing companies to engender trust and scale AI with confidence. It is imperative for business leaders to understand AI and make a top-down commitment to the responsible use of AI. Central to this is taking a human-centric approach to AI thinking and development. It is not enough to have the correct data, or an algorithm that performs accurately. It is critical to incorporate systems of governance, design and training that provide a framework for successfully implementing AI in an organization. A strong Responsible AI framework entails mitigating the risks of AI with imperatives that address four key areas.



Quote for the day:


"Leaders keep their eyes on the horizon, not just on the bottom line." -- Warren G. Bennis


Daily Tech Digest - June 24, 2019

Software Defined Perimeter (SDP): The deployment

SDP architectures are user-centric, meaning they validate the user and the device before permitting any access. Access policies are created based on user attributes. This is in contrast to traditional networking systems, which are based solely on IP addresses and do not consider the details of the user and their devices. Assessing contextual information is a key aspect of SDP. Anything tied to IP is ridiculous as we don’t have a valid hook to hang things on for security policy enforcement. We need to assess more than the IP address. For device and user assessment, we need to look deeper and go beyond IP not only as an anchor for network location but also for trust. Ideally, a viable SDP solution must analyze the user, role, device, location, time, network and application, along with the endpoint security state. Also, by leveraging elements such as directory group membership and IAM-assigned attributes and user roles, an organization can define and control access to network resources. This can be performed in a way that’s meaningful to the business, security, and compliance teams.
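
A small Python sketch of that kind of contextual check, with invented attributes; a real SDP controller would pull these signals from IAM, MDM and network telemetry rather than from a dictionary.

```python
from datetime import time

def grant_access(request: dict) -> bool:
    """Evaluate user, role, device, location, time and endpoint state together."""
    checks = [
        request["user_group"] in {"finance", "finance-admins"},
        request["device_managed"] and request["endpoint_healthy"],
        request["location"] in {"office", "home-country"},
        time(6, 0) <= request["local_time"] <= time(22, 0),
        request["app"] == "erp",
    ]
    return all(checks)

request = {
    "user_group": "finance", "device_managed": True, "endpoint_healthy": True,
    "location": "office", "local_time": time(9, 30), "app": "erp",
}
print(grant_access(request))  # True only when every contextual check passes
```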



Troy Hunt: Why Data Breaches Persist

"Anecdotally, it just feels like we're seeing a massive increase recently," he says. "I do wonder how much of it is due to legislation in various parts of the world around mandatory disclosure as well. Maybe we're just seeing more stuff come to the surface that otherwise may not have been exposed." But the potential for even bigger breaches also continues to rise, he says. "I don't see any good reason why data breaches should be reducing, certainly not in numbers," Hunt says. "I reckon there are a bunch of factors ... that are amplifying certainly the rate of breaches and also the scale of them." Such factors, he says, include the ever-increasing amounts of data being generated by organizations and individuals, the increasing use of the cloud - and the ease of losing control of data in the cloud - as well as the many more internet of things devices being brought into the world. In a video interview at the recent Infosecurity Europe conference, Hunt discusses: Long-term forecasts about data breach quantity and severity; Why breach perpetrators so often continue to be children; and How so much "smart" technology aimed at children continues to be beset by abysmal security.


Explore 4 key areas of enterprise network transformation


The top issue among the IT professionals surveyed was a lack of time to complete business initiative projects -- 43% of respondents said they struggle with this. In addition, 42% of respondents said they struggle to troubleshoot across the network as a whole. These blind spots can impede NetOps, network performance quality and, therefore, network transformation. Overall, a poorly performing network negatively affected business performance as a whole, respondents said. As such, respondents said they would prioritize the following areas of network performance: application performance, remote site performance, and endpoint and wireless performance. These improvements were among the most common goals for networking and IT professionals, according to the study. To support these network transformation goals, 37% of teams said they hope to upgrade their network performance management service. Teams can address several network performance issues with improved end-to-end visibility of their network and more insight into specific network issues. 


The Importance of Metrics to Agile Teams

Many programmes fail simply because teams could not agree or gain buy-in on meaningful sets of metrics or objectives. By its very nature, Agile encourages a myriad of different methodologies and workflows which vary by team and company. However, this does not mean that it’s impossible to achieve consensus on metrics for SI. We believe the trick is to keep metrics simple and deterministic. Complex metrics will not be commonly understood and can be hard to measure consistently, which can lead to distrust. And deterministic metrics are key as improving them will actually deliver a better outcome. As an example – you may measure Lead Times as an overall proxy of Time to Value, but Lead Time is a measure of the outcome. It’s also important to measure the things that drive/determine Lead Times, levers that teams can actively manage in order to drive improvements in the overarching metric (e.g. determinant metrics like Flow Efficiency). The deterministic metrics we advocate are designed to underpin team SI, in order to steadily improve Agile engineering effectiveness.
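
To make the distinction concrete, a few lines of Python (with made-up timestamps) compute both the outcome metric, Lead Time, and a determinant metric such as Flow Efficiency, the share of lead time spent actively working rather than waiting.

```python
from datetime import datetime

# Invented history for one work item.
requested = datetime(2019, 6, 3, 9, 0)
delivered = datetime(2019, 6, 17, 17, 0)
active_work_hours = 26            # time actually spent in progress
hours_per_day = 8

lead_time_days = (delivered - requested).days
flow_efficiency = active_work_hours / (lead_time_days * hours_per_day)

print(f"Lead time: {lead_time_days} days (the outcome)")
print(f"Flow efficiency: {flow_efficiency:.0%} (a lever the team can manage)")
```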


Microsoft’s road to multicloud support


An important part of Microsoft’s multicloud strategy is Azure Stack, which is preconfigured hardware to run Azure services that can be deployed locally. However, Kubernetes support on-premise via Azure Stack is behind the support for Kubernetes on the public Azure cloud. “We have Kubernetes on Azure Stack through a project called AKS Engine, which is in preview now,” says Monroy. He claims that AKS Engine will be generally available “soon”, adding: “We have a lot of customers who are using this today.” Serverless containers offer developers a way to achieve multicloud portability. In the Microsoft world, Azure AKS virtual nodes can be deployed to run workloads in Azure Container instances. “There is no lock-in, nothing Azure-specific – you just annotate your workloads and say ‘I want to opt in to this scaling capability’ and we’re able to provide per-second billing,” says Monroy. “If you take that same workload and you run it on a different cloud, it’s going to run.” But AKS virtual nodes are not yet available for Azure in the UK – although they are available elsewhere in Europe.


Data Governance and Data Architecture: There is No Silver Bullet


Having tools and technology facilitates the process of understanding the data, where it’s stored, how it’s organized, what the processes are, and how it’s all tied together, “but it’s not the ‘easy’ button that does everything for you.” Some companies have been trying to rely on metadata repositories alone, but the real key, he said, is in modeling. “A picture’s worth a thousand words, right?” Having the metadata and being able to do analytics and queries is helpful, but without pictures that explain how all the elements are related, and understanding the data lineage and life cycle, “You don’t have a chance.” Keeping higher-level business goals in mind is essential, but implementation should be focused on the fundamentals. “Metadata is a big piece of that too. A lot of the metadata is focused up at that higher level. Are your metadata management tools really getting down to the lower level?” Data and process modeling in particular are more important now than they’ve ever been before, he said, but that modeling should be coupled with the reverse engineering capabilities and all the tools and processes needed to do proper governance.


Disposable Technology: A Concept Whose Time Has Come


Modern digital companies like Google, Facebook, Twitter, Apple, Netflix, Amazon, and AirBnB have taken a technology architecture approach that increasingly treats the technology infrastructure as “disposable” using open source technologies. And the reason for this open approach, in my humble opinion, is two-fold: Firstly, building upon open source technologies provides the flexibility, agility and mobility for companies to move to the next best technology without the constraints  ... Modern digital companies are basing their technology infrastructure on open source technologies that not only prevent vendor architectural lock-in but also allow them to advance the technology capabilities at their pace and at the pace of the business; and Secondly and more importantly, these digital companies understand that the technology isn’t the source of business value and differentiation. They understand that the source of business value and differentiation is: the data that these organizations are masterfully amassing via every customer engagement and every usage of the product or service; and the customer, product and operational insights that lead to new Intellectual Property (IP) monetization and commercialization opportunities.


Blue Prism acquires UK’s Thoughtonomy to expand its RPA platform with more AI

Robotic process automation — which lets organizations shift repetitive back-office tasks to machines to complete — has been a hot area of growth in the world of enterprise IT, and now one of the companies that’s making waves in the area has acquired a smaller startup to continue extending its capabilities. Blue Prism, which helped coin the term RPA when it was founded back in 2001, has announced that it is buying Thoughtonomy, which has built a cloud-based AI engine that delivers RPA-based solutions on a SaaS framework. Blue Prism is publicly traded on the London Stock Exchange — where its market cap is around £1.3 billion ($1.6 billion), and in a statement to the market alongside its half-year earnings, it said it would be paying up to £80 million ($100 million) for the firm. The deal is coming in a combination of cash and stock: £12.5 million payable on completion of the deal, £23 million in shares payable on completion of the deal, up to £20 million payable a year after the deal closes, up to £4.5 million in cash after 18 months, and a final £20 million on the second anniversary of the deal closing, in shares.


Codes Tell the Story: A Fruitful Supply Chain Flourishes


Sharing anecdotes from the process, McMillan gave the audience several practical tips. She noted that Usage and Procedure Logging (UPL) provided invaluable insights for the migration. “This tells you not only which objects you’re touching, but also which business processes they’re calling: Warehouse or inventory management? We used this to figure out what’s really being used,” with respect to custom coding. She said the results were very promising: “What we found out, in production, is that almost 60% of the custom code developed in the last 5-10 years was not being used! I can’t tell you how many of those custom scripts were used just once, and never touched again.” This was fantastic news, because custom code can cause serious headaches when undergoing a migration of this magnitude. Every last bit of custom code needs to be vetted, which can be very time-consuming and error-prone. “Some of the most tedious parts were really challenging,” McMillan said. “Having to go through object by object took a lot of time; certain tables that SAP made obsolete; fields where the type has changed. When you do the migration, you can’t code the same way you used to code.”


Obscuring Complexity

How can MDSD obscure the complexity of your application code? It is tricky but it can be done. The generator outputs the code that implements the API resources, so the developers don't have to worry about coding that. However, if you use the generator as a one-time code wizard and commit the output to your version-controlled source code repository (e.g. git), then all you did was save some initial coding time. You didn't really hide anything, since the developers will have to study and maintain the generated code. To truly obscure the complexity of this code, you have to commit the model into your version-controlled source code repository, but not the generated source code. You need to generate that output source from the model every time you build the code. You will need to add that generator step to all your build pipelines. Maven users will want to configure the swagger-codegen-maven-plugin in their pom file. That plugin is a module in the swagger-codegen project. What if you do have to make changes to the generated source code? That is why you will have to assume ownership of the templates and also commit them to your version-controlled source code repository.



Quote for the day:


"Do not compromise yourself. You are all you have got." -- Janis Joplin


Daily Tech Digest - June 23, 2019

Facebook's Libra Cryptocurrency Prompts Privacy Backlash

Facebook's cryptocurrency plans have raised bipartisan concerns, with Rep. Patrick McHenry, R-N.C., telling The Verge: "It is incumbent upon us as policymakers to understand Project Libra. We need to go beyond the rumors and speculations and provide a forum to assess this project and its potential unprecedented impact on the global financial system." On Wednesday, the U.S. Senate Banking Committee announced it would hold a hearing about the company's cryptocurrency plans on July 16. So far, the committee has not released a list of witnesses it intends to call, according to Reuters. A Facebook spokesman tells Information Security Media Group: "We look forward to responding to lawmakers' questions as this process moves forward." Besides new concerns over its cryptocurrency plans, Facebook is already facing scrutiny from the U.S. Federal Trade Commission regarding its data-sharing practices, with the company preparing to pay as much as a $3 billion fine. Facebook has been bound by an agreement with the FTC since 2011 that stems from previous privacy missteps, including sharing data without consent.



A CISO's Insights on Breach Detection

You have to identify what potentially anomalous behavior is, know what you're logging and reporting on, and make sure you have team members who are available to address these anomalies." Key steps, the CISO says, include using appropriate technologies, such as security incident and event monitoring tools, as well as effectively using security team resources "to conduct root cause analysis to identify what's going on." Parker will be a featured speaker at ISMG's Healthcare Security Summit in New York on June 25. He will join other CISOs and security experts who will address breach detection and an array of other top security challenges. In the interview (see audio link below photo), Parker also discusses: Conquering "alarm fatigue," which often slows the process of identifying breaches; Why many insider breaches are more difficult to detect than some incidents involving hackers; and The growing breach risks posed by supply chain vendors and other third parties, including incidents potentially involving compromised application programming interfaces.


Rise in business-led IT spend increases risks and opportunities


Despite commanding larger budgets for technology, CIOs also seem to be losing influence, with the percentage of CIOs sitting on the board falling from 71% to 58% in two years, according to the research. However, Bates does not think fewer CIOs sitting on the board will have a negative impact on business-led IT projects – or even IT projects in general. “CIOs continue to exert a strong degree of influence and are being joined by a new generation of technology-savvy executives like the chief technology officer, chief digital officer and chief data officer,” he said. “As organisations mature into this new paradigm of a coalition of technology leaders, there will be more effective governance at all levels. “We are at a moment in time where the CIO is still best positioned to advise the board and senior business leaders on technology and will increasingly have deep subject matter expertise from fellow executives to inform decision-making.” Beyond the disconnect between business and IT, another issue highlighted by the study is the slow progress in diversity and inclusion, with 74% of IT leaders polled saying related initiatives are, at most, “moderately successful”, with only minimal growth in women on tech teams – rising to 22% this year, compared with 21% last year.


MongoDB grows its solution portfolio while boosting its flagship platform

Positioning its document database as a platform for AI/machine learning app developers, MongoDB this week announced the beta of MongoDB Atlas Data Lake. This new serverless offering supports rich data analytics via the MongoDB Query Language. It supports polymorphic data in multiple schema-free formats at any scale, compressed or uncompressed, and will offer a consolidated user interface and billing with on-demand, usage-based pricing. For storage, MongoDB Atlas Data Lake allows customers to “bring your own bucket” such as AWS S3, with MongoDB only charging customers for the ability to query the stored data through the Data Lake service. It allows customers to query data quickly on S3 in any format, including JSON, BSON, CSV, TSV, Parquet and Avro, using the MongoDB Query Language. By bringing the MongoDB Query Language to the MongoDB Atlas Data Lake, this service enables developers to use that language across data on S3, making the querying of massive data sets easier and more cost-effective.
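
As a hedged sketch of what this looks like from Python (the connection string, database and collection names below are placeholders, not real endpoints), the same MongoDB Query Language aggregation syntax used against a live cluster can be pointed at lake data:

```python
from pymongo import MongoClient

# Placeholder URI: an Atlas Data Lake exposes a standard MongoDB connection string,
# with S3 buckets mapped to virtual databases and collections during setup.
client = MongoClient("mongodb://data-lake.example.net")
sales = client["lake"]["sales_2019"]   # hypothetically backed by files in S3

# The familiar MQL aggregation pipeline, unchanged.
pipeline = [
    {"$match": {"region": "EMEA", "amount": {"$gt": 1000}}},
    {"$group": {"_id": "$product", "total": {"$sum": "$amount"}}},
    {"$sort": {"total": -1}},
]
for row in sales.aggregate(pipeline):
    print(row)
```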


Using a Microservices Architecture to Develop Microapps


Let's look at microapps in terms of mobility. Because today we have a problem. Modern enterprises often have 20 or more web and mobile apps that you, as an employee, have to use just to get your job done. And how much functionality do you really use within these apps? It can be hard to find what we want, when we want. We are also suffering from app fatigue, both as app consumers in our personal lives and in our work lives as we deal with tens to hundreds of apps on our devices. As app developers we are also suffering, because these 20+ apps have to be maintained. We are also fielding requests for more and more apps, just adding to the maintenance pile. ... However, with a proper microapps platform, instead of marching through these steps for each new mobile experience we create, microapps allow us to do the hard and boring stuff once, and focus on the practical features and engaging experiences that we ultimately want to deliver! ... From a technical perspective, Kinvey Microapps enables you, as an app developer, to be dramatically more productive delivering mobile experiences to your users.


Ransomware gang hacks MSPs to deploy ransomware on customer systems

Hanslovan said hackers breached MSPs via exposed RDP (Remote Desktop Protocol) endpoints, elevated privileges inside compromised systems, and manually uninstalled AV products, such as ESET and Webroot. In the next stage of the attack, the hackers searched for accounts for Webroot SecureAnywhere, remote management software (console) used by MSPs to manage remotely located workstations in the networks of their customers. According to Hanslovan, the hackers used the console to execute a PowerShell script on remote workstations; the script downloaded and installed the Sodinokibi ransomware. The Huntress Labs CEO said at least three MSPs had been hacked this way. Some Reddit users also reported that in some cases, hackers might have also used the Kaseya VSA remote management console, but this was never formally confirmed. "Two companies mentioned only the hosts running Webroot were infected," Hanslovan said.


Navigating the Path Toward Becoming an Intelligent Enterprise


From an operations standpoint, the Index indicates that 82 percent of surveyed companies are sharing information from their IoT solutions with employees more than once a day. This is an increase of 12 percent from the previous year. In fact, approximately two-thirds of these companies share operational data about enterprise assets, including status, location, utilization or preferences, in real- or near-real time to help drive better, more timely decisions. This shows that brands are making the transition to Industry 4.0—using connected, automated systems to collect and analyze data during every step of their processes and bridging the gap between the digital and physical to maximize efficiency, productivity, and transparency. ... It is not an easy task to quantify how “intelligent” an enterprise is or how much the manufacturing and T&L space is changing to adopt IoT solutions. This intelligence cannot simply be determined by which technology solutions a company utilizes or how open-minded they are about new processes.


Why Cybersecurity Takeovers Are Surging As Stocks Reach New Highs

Cybersecurity investor Ron Gula noted that chatter of a forthcoming recession often allows private backers to put more pressure on startups to raise money, thus putting more pressure on them to cash out sooner. As more companies see rivals go the M&A or public route, “this can create a sense of urgency,” Gula told Fortune. Another factor driving the exit wave is the timing of the cybersecurity venture capital boom, which started about five years ago, making many companies ripe for an exit around the same time. Meanwhile, there are more potential buyers across industries. That's because companies not traditionally regarded as cybersecurity firms are looking to add the offering to their portfolios. “They see the benefit of saying, We have lots of data, we’re gonna look to add security to that data,” explained Enrique Salem, former CEO of cybersecurity company Symantec and a current investor at Bain Capital Ventures, per Fortune.


“This research highlights the fact that building a strong cyber security culture and subscribing to the right best practices can help organisations of any size maximise their security effectiveness,” said Wesley Simpson, (ISC)2 chief operating officer. “It’s a good reminder that in any partner ecosystem, the responsibility for protecting systems and data needs to be a collaborative effort, and multiple fail safes should be deployed to maintain a vigilant and secure environment. The blame game is a poor deterrent to cyber attacks.” Nearly two-thirds (64%) of large enterprises outsource at least a quarter (26%) of their daily business tasks, which requires them to allow third-party access to their data. These outsourced functions can include anything from research and development, to IT services and accounts payable. This data access and sharing is necessary as a large enterprise scales its operations, but the research indicates that access management and vulnerability mitigation is often overlooked.


Top 5 Aspects That Can Strengthen Your Data Governance Framework

There is a reason why the term ‘data dump’ is popular. The only job of a data source is to collect information and ‘dump’ it where you can access it. This is why businesses have to sift through petabytes of data just to find something meaningful to gain business insights from. It is only after this data has been categorized into usable, helpful portions that it starts being realized as an asset. Data quality is, therefore, the simple act of converting raw data into a usable form and maintaining it as an asset. Data governance helps you uncover new sources of information and draw better business value from your data. It can also identify broken/missing pieces of information and prevent duplicates from interfering with one another. Through data governance, outdated information can be flagged for attention, and critical data can be highlighted to the right teams within the organization. Broken links, incomplete files, incorrect prioritization, etc. are all incidents that greatly affect data quality. Data governance practices help fix such occurrences and also maintain it.



Quote for the day:


"Character matters; leadership descends from character." -- Rush Limbaugh


Daily Tech Digest - June 22, 2019

Why AI is here to stay

So here’s why AI is not a fad: in real life, there’s no way I’m giving up my ability to fall back on teaching with examples if I’m not clever enough to come up with the instructions. Absolutely not! I’m pretty sure I use examples more than instructions to communicate with other humans when I stumble around the real world. AI means I can communicate with computers that second way — via examples — not only by instructions. Are you seriously asking me to suddenly gag my own mouth? Remember, in the old days we had to rely primarily on instructions only because we couldn’t do it the other way, in part because processing all those examples would strain the meager CPUs of last century’s poor desktops. But now that humanity has unlocked its ability to express itself to machines via examples, why would we suddenly give that option up entirely? A second way of talking to computers is too important to drop like yesterday’s shoulder pads. What we should drop is our expectation that there’s a one-size-fits-all way of communicating with computers about every problem. Say what you mean and say it the way that works best.
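
The two ways of talking to computers contrast neatly in a few lines of Python (the data is invented): write the rule down when you can state it, and fit a model from examples when you cannot.

```python
from sklearn.linear_model import LinearRegression

# Communicating by instruction: I can state the rule, so I just write it down.
def fahrenheit(celsius):
    return celsius * 9 / 5 + 32

# Communicating by example: I only have input/output pairs, so I let the
# machine recover the relationship from them.
examples_in = [[0], [10], [20], [30], [40]]
examples_out = [32, 50, 68, 86, 104]
learned = LinearRegression().fit(examples_in, examples_out)

print(fahrenheit(25))              # 77.0, by instruction
print(learned.predict([[25]])[0])  # ~77.0, by example
```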


Pledges to Not Pay Ransomware Hit Reality

"I don't think you can make a blanket statement of 'pay the ransom' or 'don't pay the ransom,'" says Adam Kujawa, director of the research labs at security firms Malwarebytes. "If you have failed to segment your data or your network, or failed to check your backups or other measures to get your company back on track quickly, then you will have to deal with the fallout." One problem for companies: Ransomware operators have shifted away from blanketing consumers and businesses with opportunistic ransomware attacks and now almost exclusively target business and municipalities. Along with that shift, the cost of ransoms has quickly grown because such organizations can afford to pay. Now, many organizations are faced with seven-digit ransom demands, Zelonis says. "That's a heck of a payday," he adds. The increase in ransom demands is driven by attackers' targeting and research on victims, he says.


End of the line for Internet Explorer 10 might mean updating embedded systems


Microsoft hasn't given specific dates yet; IE11 is coming to the Update Catalog sometime in spring 2019 (which likely means before the end of June), with the other upgrade options coming later in 2019. That means you won't have many months to test and validate IE11 on any systems where you're still using IE10, so you will want to plan your test labs and pilot rings now. Microsoft deliberately didn't put the new Edge browsing engine into IE11 because of enterprise concerns that it might cause compatibility problems. Instead, it still uses the Trident engine and includes document modes that emulate the IE5, IE7, IE8, IE9 and IE10 rendering engines. There are also specific Enterprise Modes to emulate IE8, and IE8 in Compatibility View, but if your sites worked in IE 10 you won't need those. What you will need to change are sites that have the x-ua-compatible meta tag or HTTP header set to 'IE=edge'; in IE10 that means Internet Explorer 10 mode, but in IE11 it means Internet Explorer 11 mode, because it's just asking for the latest IE version. Set it to 'IE=10' if the site has problems.


Expect graph database use cases for the enterprise to take off

As useful as graph databases are for certain types of queries and analysis, graph tools will present several challenges to CIOs, Moore warned. Data engineers and business experts need to learn new skill sets and create new workflows for defining and refining the graph data models used for these applications. Classical SQL databases were optimized to conserve memory and CPU. They are still the best technology for many kinds of applications such as ERP that involve doing a lot of columnar addition. But joining database tables together to do new kinds of queries can add considerable overhead to SQL databases. As a result, new types of queries can be limited by memory capacity. In contrast, graph databases, as noted, precompute these relationships in a way that speeds analytics and shrinks the size of the data store. In one project, Moore said he managed to shrink a 5 TB SQL database into a 2 TB graph database. A big challenge that must be factored into graph database use cases is their slower performance when writing to the database.
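
A small Python sketch using networkx (with invented data) of the point about precomputed relationships: once edges are stored explicitly, a "friends of friends" question is a traversal rather than repeated self-joins on a table.

```python
import networkx as nx

# In a relational store this would be a friendships table, and a two-hop query
# would self-join it; in a graph the relationships are first-class edges.
g = nx.Graph()
g.add_edges_from([("alice", "bob"), ("bob", "carol"),
                  ("carol", "dave"), ("alice", "erin")])

def friends_of_friends(person):
    direct = set(g.neighbors(person))
    two_hop = {fof for f in direct for fof in g.neighbors(f)}
    return two_hop - direct - {person}

print(friends_of_friends("alice"))  # {'carol'}
```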


7 Types Of Artificial Intelligence

Since AI research purports to make machines emulate human-like functioning, the degree to which an AI system can replicate human capabilities is used as the criterion for determining the types of AI. Thus, depending on how a machine compares to humans in terms of versatility and performance, AI can be classified under one, among the multiple types of AI. Under such a system, an AI that can perform more human-like functions with equivalent levels of proficiency will be considered as a more evolved type of AI, while an AI that has limited functionality and performance would be considered a simpler and less evolved type. Based on this criterion, there are two ways in which AI is generally classified. One type is based on classifying AI and AI-enabled machines based on their likeness to the human mind, and their ability to “think” and perhaps even “feel” like humans. According to this system of classification, there are four types of AI or AI-based systems: reactive machines, limited memory machines, theory of mind, and self-aware AI.


The logic of digital change

Disruption may be a yawn, but the fact is that the internet is changing things slowly but surely, and specifically it began when cloud and APIs allowed start-ups to bootstrap and launch on a shoestring. Now, there are 12,000 start-ups globally getting investments that have been doubling down each year – $111.8 billion last year – and so there is something happening. Don’t be complacent. Nothing may have happened in the last quarter century but something will happen in the next, and only the banks that adapt will survive, as Charles Darwin would say. ... there is specifically a fourth revolution of humanity occurring where the people who historically could not be reached by banks are now being reached by technology. The financially illiterate, the folks who aren’t worth it, the financially vulnerable, the unbankable, are all getting to be included because that’s what digital does. In a world where we distribute money physically, you cannot afford to deal with someone in a remote African village; in a world where we distribute money digitally, even the guy sitting in a village near the base camp of Mount Everest can trade and transact.


A.I. Ethics Boards Should Be Based on Human Rights


Human rights are imperfect ideals, subject to conflicting interpretations, and embedded in agendas with “outsized expectations.” Though supposedly global, human rights aren’t honored everywhere. Nevertheless, the United Nations Universal Declaration of Human Rights is the best statement ever crafted for establishing all-around social and legal equality and fundamental individual freedoms. The Institute of Electrical and Electronics Engineers rightly notes that human rights are a viable benchmark, even among diverse ethical traditions. “Whether our ethical practices are Western (Aristotelian, Kantian), Eastern (Shinto, Confucian), African (Ubuntu), or from a different tradition, by creating autonomous and intelligent systems that explicitly honor inalienable human rights and the beneficial values of their users, we can prioritize the increase of human well-being as our metric for progress in the algorithmic age.” Technology companies should embrace this standard by explicitly committing to a broadly inclusive and protective interpretation of human rights as the basis for corporate strategy regarding A.I. systems. They should only invite people to their A.I. ethics boards who endorse human rights for everyone.



Accelerating Digital Innovation Inside & Out


Not only are digitally maturing companies more likely to use cross-functional teams, those teams generally function differently in more mature organizations than in less mature organizations. They’re given greater autonomy, and their members are often evaluated as a unit. Participants on these teams are also more likely to say that their cross-functional work is supported by senior management. For more advanced companies, the organizing principle behind cross-functional teams is shifting from projects toward products. Digitally maturing companies are more agile and innovative, but as a result they require greater governance. Organizations need policies that create sturdy guardrails around the increased autonomy their networking strength allows. Digitally maturing companies are more likely to have ethics policies in place to govern digital business. Policies alone, however, are not sufficient. Only 35% of respondents across maturity levels say their company is talking enough about the social and ethical implications of digital business.


Three hacking trends you need to know about to help protect yourself


"The blurred lines between the techniques used by nation-state actors and those used by criminal actors have really gotten a lot fuzzier," says Jen Ayers, vice president of OverWatch cyber intrusion detection and security response at CrowdStrike. "Many criminal organisations are still very loud, but the fact is rather than going the traditional spam email route that they have been before, they are actively intruding onto enterprise networks, they are targeting unsecured web servers and going in, stealing credentials and doing reconnaissance," she adds. This is another tactic which malicious threat actors are beginning to deploy in order to both avoid detection and make attacks more effective – conducting campaigns that don't focus on Windows PCs and other common devices used in the enterprise. With these devices sitting in front of users every single day, and a top priority for antivirus software, there's a higher chance that an attack on these devices will either be prevented by security measures or spotted by users.


Data Strategy: Essential elements to enhance it

Elena Alfaro, head of data and open innovation at the client solutions division of the Spanish bank BBVA, described her organization's work of "spreading the culture of data" and ensuring that the senior leadership of an organization is on board with the data initiatives. "What I've learned is if the person you're sitting with doesn't understand, it is very difficult to get to something big," said Alfaro. For the past two years, Forrester has ranked the BBVA's mobile app the best in the banking business. Forrester's Aurelie L'Hostis credited the bank's app for "striking a superb balance between useful functionality and excellent user experience," a product that Alfaro says grew out of a data strategy with the end user in mind. "Digital banks listen to their customers, they're clever with data, and they work hard on making it easy for customers to manage their financial lives," L'Hostis writes. "It's not a small feat, but that's what your customers are demanding." But regardless of the industry, Wixom argues that companies with a successful data strategy implement a framework that ensures a high level of data integrity and makes sure that it is broadly and easily accessible.



Quote for the day:


"Each day you are leading by example. Whether you realize it or not or whether it's positive or negative, you are influencing those around you." -- Rob Liano


Daily Tech Digest - June 21, 2019

Defining a Test Strategy for Continuous Delivery

Defining the test cases requires a different mindset than implementing the code. It's better that the test cases are not defined by the same person who implemented the feature. Implementing good automated tests requires serious development skills. This is why, if there are people on the team who are just learning to code (for example, testers who are new to test automation), it's a good idea to make sure the team gives them the right amount of support to skill up. This should be done through pairing, code reviews and knowledge-sharing sessions. Remember that the entire team owns the codebase. Don't fall into the split-ownership trap, in which production code is owned by the devs and test code is owned by the testers. This hinders knowledge sharing, introduces test case duplication and can lead to a drop in test code quality. Developers and testers are not the only ones who care about quality. Ideally, the Product Owner should define most of the acceptance criteria. She is the one who has the best understanding of the problem domain and its essential complexity, so she should be a major contributor when writing acceptance criteria.
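As a rough illustration of an acceptance criterion becoming a shared, automated test, here is a minimal sketch in Python; the Cart class and its methods are hypothetical stand-ins invented for the example, not taken from any particular codebase.

# Hypothetical acceptance test for the criterion
# "a shopper can add an item to their cart and see the right item count".
# Cart is a stand-in class so the example is self-contained; run with pytest.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, sku, quantity=1):
        if quantity < 1:
            raise ValueError("quantity must be at least 1")
        self.items.append((sku, quantity))

    def total_items(self):
        return sum(qty for _, qty in self.items)


def test_shopper_can_add_item_and_sees_correct_count():
    cart = Cart()
    cart.add("SKU-123", quantity=2)
    assert cart.total_items() == 2

Keeping the test name readable as the criterion itself gives developers, testers and the Product Owner a single artifact to review together.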



Blockchain expert Alex Tapscott sees coming crypto war as 'cataclysmic'

Digital technology has had a profound impact on virtually every aspect of our lives – except for banking. The institutions we rely on as trusted intermediaries to move, store and manage value, exchange financial assets, enable funding and investment and insure against risk are more or less unchanged since the advent of the internet. This is changing, thanks to blockchain. Libra is only the latest in a wave of revolutionary new innovations that is beginning to disrupt the old model. Bitcoin remains the most consequential and important innovation in at least a generation. It laid the groundwork for a new internet of value that promises to do to value industries, like financial services, what the internet did to information industries, like publishing. At first, the impact on banks will be muted. In fact, Facebook will need to rely on some existing banking infrastructure to successfully launch Libra. Over time, however, Libra could cut banks out of many aspects of the industry altogether. I hold the same deep belief about Bitcoin: it will do the same.


The downfall of the virtual assistant (so far)

We've talked plenty about the reasons why everyone and their mother wants you to get friendly with their flavor of robot aid — and why that, in turn, has led to what I call the post-OS era, in which a device's operating system is less important than the virtual assistant threaded throughout it. It's no coincidence that Google is slowly expanding Assistant into a platform of its own, and what we're seeing now is almost certainly just the tip of the iceberg. Something we haven't discussed much, though, is a painful reality that often gets overlooked in all the glowing coverage about this-or-that new virtual assistant gizmo or feature. And for anyone who ever tries to rely on this type of talking technology — be it for on-the-go answers from your phone, on-the-fly device control in your home, or hands-free help in your office — it's a reality that's all too apparent. The truth is, for all of their progress and the many ways in which they can be handy, voice assistants still fail far too frequently to be dependable. And the more Google and other companies push their virtual assistants and expand the areas in which they operate, the more pressing the challenge to correct this problem becomes.


Introduction to Reinforcement Learning


Why are we talking about all this? What does this mean for us, other than that we need to have pets if we want to become famous psychologists? What does it have to do with artificial intelligence? Well, these topics explore a type of learning in which a subject interacts with its environment. This is the way we as humans learn as well. When we were babies, we experimented. We performed some actions and got a response from the environment. If the response was positive (a reward), we repeated those actions; if it was negative (a punishment), we stopped. In this article, we will explore reinforcement learning, a type of learning inspired by this goal-directed learning from interaction. ... Another type of learning is unsupervised learning. In this type of learning, the agent is provided only with input data and needs to make some sort of sense of it. The agent is basically trying to find patterns in otherwise unstructured data. This type of learning is typically used for clustering and other problems of discovering structure in unlabeled data.
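To make that reward-driven loop concrete, here is a minimal sketch (not from the original article) of an agent learning from interaction: a two-armed bandit where the agent tries actions, receives rewards, and gradually favours the action that pays off, using a simple epsilon-greedy strategy. The reward probabilities are invented for the example.

import random

# Minimal sketch of goal-directed learning from interaction:
# a two-armed bandit where the agent learns which arm pays off more.

REWARD_PROBS = [0.3, 0.7]   # hidden reward probability of each action

def pull(arm):
    """Environment: return a reward of 1 with the arm's hidden probability."""
    return 1.0 if random.random() < REWARD_PROBS[arm] else 0.0

def run(episodes=5000, epsilon=0.1):
    values = [0.0, 0.0]   # estimated value of each action
    counts = [0, 0]
    for _ in range(episodes):
        # Explore occasionally, otherwise exploit the best-looking action.
        if random.random() < epsilon:
            arm = random.randrange(2)
        else:
            arm = max(range(2), key=lambda a: values[a])
        reward = pull(arm)
        counts[arm] += 1
        # Incremental average: actions that yield reward get reinforced.
        values[arm] += (reward - values[arm]) / counts[arm]
    return values

if __name__ == "__main__":
    print(run())   # estimates should approach [0.3, 0.7]

Full reinforcement learning adds states and delayed rewards on top of this, but the core loop of acting, observing a reward and updating an estimate is the same.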


Cyberwarfare escalation just took a new and dangerous turn


In the murky world of espionage and cyberwarfare, it's never entirely clear what's going on. Does the US really have the capabilities to install malware in Russian energy systems? If so, why would the intelligence agencies be comfortable (as they seem to be) with the story being reported? Is this an attempt to warn Russia and make its government worry about malware that might not even exist? But beyond the details of this particular story, there are a number of major concerns here -- particularly around unexpected consequences and the escalation of cyberwarfare risks. It's very hard for a company (or a government) to tell the difference between hackers probing a network as part of general reconnaissance and the early stages of an attack itself. So even probing critical infrastructure networks could raise tensions. There's significant risk in planting malware inside another country's infrastructure with the aim of using it in future. The code can be discovered, which is at the very least embarrassing and, worse, could be seen as a provocation. It could even be reverse-engineered and used against the country that planted it.


Nutanix XI IoT: An Overview For Developers

By distributing the computing part of the problem to the edge, we can execute detection-decision-action logic with limited latency. For example, immediate detection might mean a defective product never leaves the production line, much less makes it to the customer. The consequences of receiving a defective item can range from inconvenient to catastrophic. If it is an article of clothing, the article might require a return. While this may have a range of negative consequences to the business, it does not compare to the consequences of having a defective part installed in an aircraft. Edge computing of data created by IoT edge devices can clearly benefit business, but as we mentioned earlier, as the number and diversity of devices grows, so does the workload for developers attempting to write applications for these devices. Configuring devices, networking devices, managing devices and data streams … these are all tasks that distract developers from the primary task at hand: creating the applications that use IoT data to serve the needs of your business.
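To illustrate the detection-decision-action pattern described above, here is a minimal, hypothetical sketch of an edge-side loop; read_frame, defect_score and divert_item are invented placeholders for whatever camera, model and actuator interfaces a real Xi IoT deployment would wire in.

import random
import time

# Hypothetical sketch of detection-decision-action logic running at the edge.

def read_frame():
    """Pretend camera read: returns a fake frame identifier."""
    return {"frame_id": random.randint(0, 10_000)}

def defect_score(frame):
    """Pretend inference step: returns a defect probability."""
    return random.random()

def divert_item(frame):
    """Pretend actuator: pull the suspect item off the production line."""
    print(f"Diverting item from frame {frame['frame_id']}")

def edge_loop(threshold=0.95, cycles=10):
    for _ in range(cycles):
        frame = read_frame()           # detection input
        score = defect_score(frame)    # detection
        if score > threshold:          # decision
            divert_item(frame)         # action, taken locally
        time.sleep(0.1)

if __name__ == "__main__":
    edge_loop()

Because the whole loop runs at the edge, the decision to divert a defective item does not depend on a round trip to a central data center.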


Blockchain and AI combined solve problems inherent in each


Best known as the technology that powered bitcoin, blockchain offers an immutable record of every transaction, ensuring that all nodes have the same version of the truth and no records are tampered with. That makes it a relatively fail-safe and hack-proof method for storing and transferring monetary value. But to provide this safety, the nodes have to perform heavy computation to validate transactions. Blockchain's mechanism for ensuring safety is also its weakness, as it limits scalability. The same is true of blockchain's immutability: every node needs to store the entire history of all transactions. The problems associated with AI are different. AI needs data to operate, but getting good data can be problematic. For instance, hackers can alter the data a machine is trained on with a data-poisoning attack. Collecting data from clients is also problematic, especially in light of data privacy laws such as Europe's GDPR. Finally, most of the data needed for effective AI is owned by large organizations, such as Google and Facebook.
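A minimal sketch of why tampering is detectable in such a ledger (a generic illustration, not any specific blockchain's implementation): each record stores the hash of the previous one, so rewriting an earlier record breaks every later link.

import hashlib
import json

# Minimal hash-chained ledger illustrating tamper-evidence.

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})
    return chain

def verify(chain):
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

if __name__ == "__main__":
    chain = []
    for tx in ["alice->bob:5", "bob->carol:2"]:
        append(chain, tx)
    print(verify(chain))                   # True
    chain[0]["data"] = "alice->bob:500"    # tamper with history
    print(verify(chain))                   # False: later hashes no longer match

Real blockchains add consensus (the heavy computation mentioned above) on top of this structure, so that no single node can simply rewrite and re-hash the whole chain.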



In an effort to ensure the UK’s resilience to attacks that exploit vulnerabilities in network-connected cameras, the SCC said the minimum requirements were an important step forward for manufacturers, installers and users alike. The work has been led by Mike Gillespie, cyber security advisor to the SCC and managing director of information security and physical security consultancy Advent IM, along with Buzz Coates, business development manager at CCTV distributor Norbain. The standard was developed in consultation with surveillance camera manufacturers Axis, Bosch, Hanwha, Hikvision and Milestone Systems. Speaking ahead of the official launch, Gillespie said that if a device came out of the box in a secure configuration, there was a good chance it would be installed in a secure configuration. “Encouraging manufacturers to ensure they ship their devices in this secure state is the key objective of these minimum requirements for manufacturers,” he said. Manufacturers benefit, said Gillespie, by being able to demonstrate that they take cyber security seriously and that their equipment is designed and built to be resilient.


3 top soft skills needed by today’s data scientists


Data scientists who can understand the business context, plus the technical side of the equation, will be invaluable. This kind of “bilingual” talent can turn data streams into a predictive model, and then translate that model into a working reality, such as for financial forecasting. Core skills in storytelling, problem solving, agile development, and design thinking are critical to interoperating within different business contexts as well. The key is to develop T-shaped skillsets, as opposed to being I-shaped. While I-shaped people have a deep, narrow understanding of one area (like data engineering or data science), T-shaped people have both in-depth knowledge in one area and a breadth of understanding of several others. It is easier for T-shaped people to meld their data expertise to a broad range of use cases and industries. ... The communication side will be especially important as data expertise gets pulled into interdisciplinary use cases. Data scientists will have to be able to talk to people with different backgrounds. This goes back to the need to be more T-shaped to effectively translate highly technical ideas to different business contexts.


Using OpenAPI to Build Smart APIs for Dumb Machines

OpenAPI isn’t the only spec for describing APIs, but it is the one that seems to be gaining prominence. It started life as Swagger and was rebranded OpenAPI with its donation to the OpenAPI Initiative. RAML and API Blueprint have their own adherents. Other folks like AWS, Google, and Palantir use their own API specs because they predate those other standards, had different requirements, or found even opinionated specs like OpenAPI insufficiently opinionated. I’ll focus on OpenAPI here because its surging popularity has spawned tons of tooling. The act of describing an API in OpenAPI is the first step in the pedagogical process. Yes, documentation for humans to read is one obvious output, but OpenAPI also lets us educate machines about the use of our APIs to simplify things further for human consumers and to operate autonomously. As we put more and more information into OpenAPI, we can start to shift the burden from humans to the machines and tools they use. With so many APIs and so much for software developers to know, we’ve become aggressively lazy by necessity. APIs are a product; reducing friction for developers is a big deal.
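To show the sort of description machines consume, here is a minimal, illustrative OpenAPI 3.0 document expressed as a Python dictionary; the /status endpoint and its schema are invented for the example, and real specs are usually authored in YAML or JSON.

import json

# A minimal, illustrative OpenAPI 3.0 description expressed as a Python dict.
# The /status endpoint is invented for the example.

openapi_doc = {
    "openapi": "3.0.0",
    "info": {"title": "Example Service", "version": "1.0.0"},
    "paths": {
        "/status": {
            "get": {
                "summary": "Return service health",
                "responses": {
                    "200": {
                        "description": "Service is healthy",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "object",
                                    "properties": {"status": {"type": "string"}},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
}

if __name__ == "__main__":
    # The same description humans read as documentation is what tools consume.
    print(json.dumps(openapi_doc, indent=2))

From a description like this, tooling can render human-readable docs, generate client code, and validate requests and responses against the declared schema.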



Quote for the day:


"Be willing to make decisions. That's the most important quality in a good leader." -- General George S. Patton, Jr.