Daily Tech Digest - November 23, 2020

Superhuman resources: How HR leaders have redefined their C-suite role

CHROs have to be able to envision how the strategy will be executed, the talents and skills required to accomplish the work, and the qualities needed from leaders to maximize the organization’s potential. Increasingly, that requires a nuanced understanding of how technology and humans will interact. “HR leaders sit at a crossroads because of the rise of artificial intelligence and can really predict whether a company is going to elevate their humans or eliminate their humans,” said Ellyn Shook, the CHRO of professional-services firm Accenture. “We’re starting to see new roles and capabilities in our own organization, and we’re seeing a whole new way of doing what we call work planning. The real value that can be unlocked lies in human beings and intelligent technologies working together.” ... CHROs must operate at a slightly higher altitude than their peers on the leadership team to ensure that the different parts of the business work well together. At their best, these leaders view the entire organization as a dynamic 3D model, and can see where different parts are meshing well and building on other parts, and also where there are gaps and seams. The key is to make the whole organization greater than the sum of its parts.


Three IT strategies for the new era of hybrid work

While the hyper-automation strategy will make life much easier for IT teams by delivering more automated experiences, there will always be issues that humans have to resolve. Organisations must equip their IT teams with the tools to handle these issues remotely and securely to succeed in an increasingly complex environment. This begins with utilising AI and building on deep learning capabilities that provide critical information to IT teams in real time. Say an employee is unable to access restricted customer information from his home network to complete a sales order and needs to enable VPN access. With the right software platforms, the IT representative will be able to guide him remotely: push the necessary VPN software to his device, configure the necessary access information and provision his access through automation scripts. IT would also be able to discover the model of the router used in his home network if required and assist with its settings if the employee grants the rights and authorisation. IT can also assess the router's vulnerabilities and advise the employee accordingly. In the past, this work would have had to be completed in the office. With hybrid work environments, going back to the office may not even be an option.
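
A minimal sketch of what such an automation flow could look like. Everything here is hypothetical: push_package, apply_profile and grant_access are stand-ins for whatever endpoint-management and identity APIs an organisation actually uses (the sketch just prints the actions), and the user, device and gateway names are placeholders.

```python
# Hypothetical stand-ins for real endpoint-management / identity APIs; they only print.
def push_package(device_id: str, package: str) -> None:
    print(f"[mock] pushing {package} to {device_id}")

def apply_profile(device_id: str, profile: dict) -> None:
    print(f"[mock] applying VPN profile {profile} to {device_id}")

def grant_access(user: str, resource: str) -> None:
    print(f"[mock] granting {user} access to {resource}")

def provision_vpn(user: str, device_id: str) -> None:
    """The three steps from the scenario: software, configuration, access."""
    push_package(device_id, "corp-vpn-client")
    apply_profile(device_id, {"gateway": "vpn.example.com", "mfa": True})
    grant_access(user, "restricted-customer-data")

provision_vpn("jsmith", "laptop-1234")
```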


Security pros fear prosecution under outdated UK laws

MP Ruth Edwards, who previously led on cyber security policy for techUK, said: “The Computer Misuse Act, though world-leading at the time of its introduction, was put on the statute book when 0.5% of the population used the internet. The digital world has changed beyond recognition, and this survey clearly shows that it is time for the Computer Misuse Act to adapt. “This year has been dominated by a public health emergency – the coronavirus pandemic – but it has also brought our reliance on cyber security into stark relief. We have seen attempts to hack vaccine trials, misinformation campaigns linking 5G to coronavirus, a huge array of coronavirus-related scams, an increase in remote working and more services moving online. “Our reliance on safe and resilient digital technologies has never been greater. If ever there was going to be a time to prioritise the rapid modernisation of our cyber legislation, and review the Computer Misuse Act, it is now,” she said. The study is the first piece of work to quantify and analyse the views of the wider security community in the UK on this issue, and the campaigners say they have found substantial concerns and confusion about the CMA that are hampering the UK’s cyber defences.


An In-Depth Explanation of Code Complexity

By knowing how many independent paths there are through a piece of code, we know how many paths there are to test. I'm not advocating for 100% code coverage by the way—that's often a meaningless software metric. However, I always advocate for as high a level of code coverage as is both practical and possible. So, by knowing how many code paths there are, we have a measure of how many tests are required, at a minimum, to ensure that the code's covered. ... By reducing software complexity, we can develop with greater predictability. What I mean by that is we're better able to say—with confidence—how long a section of code takes to complete. By knowing this, we're better able to predict how long a release takes to ship. Based on this knowledge, the business or organization is better able to set its goals and expectations, especially ones that are directly dependent on said software. When this happens, it’s easier to set realistic budgets, forecasts, and so on. Helping developers learn and grow is the final benefit of understanding why their code is considered complex. The tools I've used to assess complexity up until this point don't do that. What they do is provide an overall or granular complexity score.
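
To make the path-counting idea concrete, here is a small illustrative Python example (not from the article): a function with a cyclomatic complexity of three, and the minimum of three tests, one per independent path, needed to cover it.

```python
def shipping_cost(weight_kg, express):
    """Two decision points -> cyclomatic complexity of 3, i.e. 3 independent paths."""
    if weight_kg <= 0:                      # path 1: invalid input
        raise ValueError("weight must be positive")
    if express:                             # path 2: express shipping
        return 10 + 2.5 * weight_kg
    return 5 + 1.0 * weight_kg              # path 3: standard shipping


def test_invalid_weight():
    try:
        shipping_cost(0, express=False)
        assert False, "expected ValueError"
    except ValueError:
        pass

def test_express():
    assert shipping_cost(2, express=True) == 15.0

def test_standard():
    assert shipping_cost(2, express=False) == 7.0


if __name__ == "__main__":
    for test in (test_invalid_weight, test_express, test_standard):
        test()
    print("all 3 paths covered")
```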


How DevOps Teams Get Automation Backwards

Do you know what data (and metadata) needs to be backed up in order to successfully restore? Do you know how it will be stored, protected and monitored? Does your storage plan comply with relevant statutes, such as CCPA and GDPR? Do you regularly execute recovery scenarios to test the integrity of your backups and the effectiveness of your restore process? At the heart of each of the above examples, the problem is due in large part to a top-down mandate and a lack of buy-in from the affected teams. If the DevOps team has a sense of ownership over the new processes, then they will be much more eager to take on any challenges that arise. DevOps automation isn’t the solution to every problem. Automated UI tests are a great example of an automation solution that’s right for some types of organizations, but not for others. These sorts of tests, depending on the frequency of UI changes, can be fragile and difficult to manage. Therefore, teams looking to adopt automated UI testing should first assess whether the anticipated benefits are worth the costs, and then ensure they have a plan for monitoring and maintaining the tests. Finally, beware of automating any DevOps process that you don’t use on a frequent basis.
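
As a hedged illustration of the "test the integrity of your backups" question above, the sketch below (Python, with a hypothetical JSON manifest mapping file names to SHA-256 checksums) compares current backup files against recorded checksums; an actual restore rehearsal would go further and rebuild the system from them.

```python
import hashlib
import json
import pathlib

def sha256(path: pathlib.Path) -> str:
    """Stream a file through SHA-256 so large backups don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(backup_dir: str, manifest_file: str) -> list[str]:
    """Return the files whose current checksum no longer matches the manifest."""
    manifest = json.loads(pathlib.Path(manifest_file).read_text())
    return [
        name for name, expected in manifest.items()
        if sha256(pathlib.Path(backup_dir) / name) != expected
    ]

# Example (paths are placeholders):
# corrupted = verify_backup("/backups/2020-11-23", "/backups/2020-11-23/manifest.json")
# print(corrupted or "all files match the manifest")
```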


Security by Design: Are We at a Tipping Point?

A big contributor to security flat-footedness is the traditional “trust but verify” approach, with bolt-on and reactive architectures (and solutions) that make security complex and expensive. Detecting a threat, assessing true vs. false alerts, responding to incidents holistically and doing it all in a timely fashion demands a sizeable security workforce; a strong, well-practiced playbook; and an agile security model. As we have learned over the years, this has been hard to achieve in practice—even harder for small or mid-size organizations and those with smaller budgets. Even though dwell time has fallen in the last few years, attackers routinely spend days, weeks or months in a breached environment before being detected. Regulations like the EU General Data Protection Regulation (GDPR) mandate reporting of notifiable data breaches within 72 hours, even as the median dwell time stands at 56 days, rising to 141 days for breaches not detected internally. Forrester analyst John Kindervag envisioned a new approach in 2009, called “zero trust.” It was founded on the belief that trust itself represents a vulnerability and that security must be designed into the business with a “never trust, always verify” model.


Distributors adding security depth

“With the rapidly changing security landscape, and home working seemingly here to stay, this partnership will help organisations alleviate these security pressures through one consolidated cloud solution. Together with Cloud Distribution, we will continue to expand our UK Partner network, ensuring we are offering robust cloud security solutions with our approach that takes user organisations beyond events and alerts, and into 24/7 automated attack prevention,” he said.  Other distributors have also taken steps to add depth to their portfolios. Last month, e92plus also moved to bolster its offerings with the signing of web security player Source Defense. The distie is responding to the threats around e-commerce and arming resellers with tools to help customers that have been forced to sell online during the pandemic. The shift online has come as threats have spiked and the criminal activity around online transactions has increased. “As more businesses look to transact business online, bad actors are exploiting client-side vulnerabilities that aren’t protected by traditional solutions like web application firewalls,” said Sam Murdoch, managing director at e92cloud.


3 Steps CISOs Can Take to Convey Strategy for Budget Presentations

CISOs recognize they cannot reduce their organization's cyber-risk to zero. Still, they can reduce it as much as possible by focusing on eliminating the most significant risks first. Therefore, when developing a budget, CISOs should consider a proactive, risk-based approach that homes in on the biggest cyber-risks facing the business. This risk-based approach allows the CISO to quantify the risk across all areas of cyber weakness and then prioritize where efforts are best expended, ensuring maximum impact from fixed budgets and teams. The fact is, the National Institute of Standards and Technology reports that an average breach can cost an organization upward of $4 million — more than the overall budget of many organizations. Consider a scenario where one CISO invests heavily in proactive measures, successfully avoiding a major breach, while another invests primarily in reactive measures and ends up cleaning up after a major breach. The benefit is that the first (the proactively inclined CISO) ends up spending 10x less overall. ... While there is more awareness among top leadership and board members regarding the daunting challenges of cybersecurity, a board member primarily views cybersecurity as a set of risk items, each with a certain likelihood of happening and some business impact.
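
One way to picture the risk-based prioritization described above is a simple expected-loss ranking. The sketch below is illustrative only: the weakness list, likelihoods and impact figures are invented, and real risk-quantification models are considerably richer.

```python
# Invented figures for illustration: annualised risk = likelihood x impact.
weaknesses = [
    {"area": "unpatched VPN gateway",       "likelihood": 0.30, "impact_usd": 4_000_000},
    {"area": "phishing / credential theft", "likelihood": 0.50, "impact_usd": 1_500_000},
    {"area": "misconfigured cloud storage", "likelihood": 0.10, "impact_usd": 6_000_000},
]

for w in weaknesses:
    w["annualised_risk"] = w["likelihood"] * w["impact_usd"]

# Spend the fixed budget from the top of this list down.
for w in sorted(weaknesses, key=lambda w: w["annualised_risk"], reverse=True):
    print(f'{w["area"]:<32} ${w["annualised_risk"]:>12,.0f}')
```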


Keeping data flowing could soon cost billions, business warned

As soon as the UK leaves the EU, it will also cease to be part of the GDPR-covered zone – and other mechanisms will be necessary to allow data to move between the two zones. The UK government, for its part, has already green-lighted the free flow of digital information from the UK to the EU, and has made it clear that it hopes the EU will return the favor. This would be called an adequacy agreement – a recognition that UK laws can adequately protect the personal data of EU citizens. But whether the UK will be granted adequacy is still up for debate, with just over one month to go. If no deal is achieved on data transfers, companies that rely on EU data will need to look at alternative solutions. These include standard contractual clauses (SCCs), for example, which are signed contracts between the sender and the receiver of personal data that are approved by an EU authority, and need to be drawn up for each individual data transfer. SCCs are likely to be the go-to data transfer mechanism in the "overwhelming majority of cases," according to the report, and drafting the contracts for every single relevant data exchange will represent a costly bureaucratic and legal exercise for many firms. UCL's researchers estimated, for example, that the London-based university would have to amend and update over 5,000 contracts.


Even the world’s freest countries aren’t safe from internet censorship

Ensafi’s team found that censorship is increasing in 103 of the countries studied, including unexpected places like Norway, Japan, Italy, India, Israel and Poland. These countries, the team notes, are rated some of the world’s freest by Freedom House, a nonprofit that advocates for democracy and human rights. They were among nine countries where Censored Planet found significant, previously undetected censorship events between August 2018 and April 2020. They also found previously undetected events in Cameroon, Ecuador and Sudan. While the United States saw a small uptick in blocking, mostly driven by individual companies or internet service providers filtering content, the study did not uncover widespread censorship. However, Ensafi points out that the groundwork for that has been put in place here. “When the United States repealed net neutrality, they created an environment in which it would be easy, from a technical standpoint, for ISPs to interfere with or block internet traffic,” she said. “The architecture for greater censorship is already in place and we should all be concerned about heading down a slippery slope.”



Quote for the day:

"Beginnings are scary, endings are usually sad, but it's the middle that counts the most." -- Birdee Pruitt

Daily Tech Digest - November 22, 2020

It's time for banks to rethink how they secure customer information

To sum it up, banks and credit card companies really don't care to put too much effort into securing the accounts of customers. That's crazy, right?  The thing is, banks and credit card companies know they have a safety net to prevent them from crashing to the ground. That safety net is fraud insurance. When a customer of a bank has their account hacked or card number stolen, the institution is fairly confident that it will get its--I mean, the customer's--money back. But wait, the revelations go even deeper. These same institutions also admit (not to the public) that hackers simply have more resources than they do. Banks and credit card companies understand it's only a matter of time before a customer account is breached--these institutions deal with this daily. These companies also understand the futility of pouring too much investment into stopping hackers from doing their thing. After all, the second a bank invests millions into securing those accounts from ne'er-do-wells, the ne'er-do-wells will figure out how to get around the new security methods and protocols. From the bank's point of view, that's money wasted. It's that near-nihilistic point of view that causes customers no end of frustration, but it doesn't have to be that way.


The New Elements of Digital Transformation

Even as some companies are still implementing traditional automation approaches such as enterprise resource planning, manufacturing execution, and product life cycle management systems, other companies are moving beyond them to digitally reinvent operations. Amazon’s distribution centers deliver inventory to workers rather than sending workers to collect inventory. Rio Tinto, an Australian mining company, uses autonomous trucks, trains, and drilling machinery so that it can shift workers to less dangerous tasks, leading to higher productivity and better safety. In rethinking core process automation, advanced technologies are useful but not prerequisites. Asian Paints transformed itself from a maker of coatings in 13 regions in India to a provider of coatings, painting services, design services, and home renovations in 17 countries by first establishing a common core of digitized processes under an ERP system. This provided a foundation to build upon and a clean source of data to generate insights. Later, the company incorporated machine learning, robotics, augmented reality, and other technologies to digitally enable its expansion.


AI startup Graphcore says most of the world won't train AI, just distill it

Graphcore is known for building both custom chips to power AI, known as accelerators, and full computer systems to house those chips, with specialized software. In Knowles's conception of the pecking order of deep learning, the handful of entities that can afford "thousands of yotta-FLOPS" of computing power -- a yotta being the number ten raised to the 24th power -- are the ones that will build and train trillion-parameter neural network models that represent "universal" models of human knowledge. He offered the example of huge models that can encompass all human languages, rather like OpenAI's GPT-3 natural language processing neural network. "There won't be many of those" kinds of entities, Knowles predicted. Companies in the market for AI computing equipment are already talking about projects underway to use one trillion parameters in neural networks. By contrast, the second order of entities, the ones that distill the trillion-parameter models, will require far less computing power to re-train the universal models for something specific to a domain. And the third order of entities, of course, even less. Knowles was speaking to the audience of SC20, a supercomputing conference that takes place in a different city each year but this year is being held as a virtual event given the COVID-19 pandemic.
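
The "distilling" Knowles describes is commonly implemented as knowledge distillation, where a smaller student model is trained to match a large teacher's softened output distribution. A minimal sketch of such a loss, assuming PyTorch and ignoring the usual mixing with a hard-label loss:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradients keep a comparable magnitude across temperatures.
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2

# Toy example: batch of 4, vocabulary of 10 (random tensors stand in for real models).
teacher = torch.randn(4, 10)                        # would come from the large "universal" model
student = torch.randn(4, 10, requires_grad=True)    # the smaller domain-specific model
loss = distillation_loss(student, teacher)
loss.backward()
```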


5 Reasons for the Speedy Adoption of Blockchain Technology

Blockchain technology can only handle three to seven transactions per second, while legacy transaction processing systems are able to process tens of thousands every second. This led many observers to question the potential of blockchain as a viable option for large-scale applications. However, recent developments have produced promising ways to close this performance gap, and a new consensus mechanism is being developed. This mechanism enables participants (some of whom are unknown to each other) to trust the validity of transactions. While performance may be sluggish and a lot of computational resources may be spent on the consensus mechanism, the improved performance is the key that is popularizing the use of blockchain technology. The latest designs aim to reduce the time- and energy-intensive mining required to validate every transaction. Various blockchain-based applications are able to choose between performance, functionality, and security to suit what is most appropriate for the application. This consensus model is being especially appreciated in industries like auto-leasing, insurance, healthcare, supply chain management, trading, and more.


How next gen Internal Audit can play strategic role in risk management post-pandemic

The purpose of a business continuity plan is to ensure that the business is ready to survive a critical incident. It permits an instantaneous response to the crisis so as to shorten recovery time and mitigate the impact. This pandemic has presented an unprecedented “critical incident” for the globe. With unknown reach and duration, worldwide implications, and no basis for accurate projections, we are very much in uncharted territory. Many organizations used to develop a disaster recovery plan and business continuity procedure that was rarely put to the test in a real crisis situation. With the arrival of newer risks (e.g. cyber-attacks, data transfer confidentiality issues, struggles to maintain supply levels, workforce management, physical losses, operational disruptions, changes of marketing platforms, and the increased volatility and interdependency of the global economy), the traditionally accepted Business Continuity & Crisis Management Models are being continuously and constructively challenged. Therefore, organizations need adequate planning that results in immediate response, better decision-making, maximum recovery, effective communications, and sound contingency plans for the various scenarios that may suddenly arise.


How to Build a Production Grade Workflow with SQL Modelling

A constructor creates a test query where a common table expression (CTE) represents each input mock data model, and any references to production models (identified using dbt’s ‘ref’ macro) are replaced by references to the corresponding CTE. Once you execute a query, you can compare the output to an expected result. In addition to an equality assertion, we extended our framework to support all expectations from the open-source Great Expectations library to provide more granular assertions and error messaging. The main downside to this framework is that it requires a roundtrip to the query engine to construct the test data model given a set of inputs. Even though the query itself is lightweight and processes only a handful of rows, these roundtrips to the engine add up. It becomes costly to run an entire test suite on each local or CI run. To solve this, we introduced tooling both in development and CI to run the minimal set of tests that could potentially break given the change. This was straightforward to implement with accuracy because of dbt’s lineage tracking support; we simply had to find all downstream models (direct and indirect) for each changed model and run their tests.
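
A rough Python sketch of the constructor idea described above: mock rows become CTEs, and dbt-style {{ ref('...') }} references in the model SQL are rewritten to point at them. The function name and the mock format are hypothetical simplifications of what the framework described actually does.

```python
def build_test_query(model_sql: str, mocks: dict) -> str:
    """Wrap mock rows in CTEs and point dbt-style ref() calls at them.

    `mocks` maps model name -> list of row dicts, e.g.
    {"orders": [{"id": 1, "total": 9.99}]}.
    """
    ctes = []
    for name, rows in mocks.items():
        union = " UNION ALL ".join(
            "SELECT " + ", ".join(f"{value!r} AS {column}" for column, value in row.items())
            for row in rows
        )
        ctes.append(f"{name} AS ({union})")
        # Replace the production reference with the mock CTE of the same name.
        model_sql = model_sql.replace(f"{{{{ ref('{name}') }}}}", name)
    return "WITH " + ",\n     ".join(ctes) + "\n" + model_sql


query = build_test_query(
    "SELECT SUM(total) AS revenue FROM {{ ref('orders') }}",
    {"orders": [{"id": 1, "total": 9.99}, {"id": 2, "total": 5.00}]},
)
print(query)  # a self-contained query that can be run and compared to an expected result
```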


Google Services Weaponized to Bypass Security in Phishing, BEC Campaigns

For its part, Google stresses the company is taking every measure to keep malicious actors off their platforms. “We are deeply committed to protecting our users from phishing abuse across our services, and are continuously working on additional measures to block these types of attacks as methods evolve,” a Google spokesperson told Threatpost by email. The statement added that Google’s abuse policy prohibits phishing and emphasized that the company is aggressive in combating abuse. “We use proactive measures to prevent this abuse and users can report abuse on our platforms,” the statement said. “Google has strong measures in place to detect and block phishing abuse on our services.” Sambamoorthy told Threatpost that the security responsibility does not rest on Google alone and that organizations should not rely solely on Google’s security protections for their sensitive data. “Google faces a fundamental dilemma because what makes their services free and easy to use also lowers the bar for cybercriminals to build and launch effective phishing attacks,” he said. “It’s important to remember that Google is not an email security company — their primary responsibility is to deliver a functioning, performant email service.”


Democratize Data to Empower your Organization and Unleash More Value

Organizations, unsure whether they can trust their data, limit access instead of empowering the whole enterprise to achieve new insights for practical uses. To drive new value—such as expanded customer marketing and increased operational efficiency—democratizing data demands building out a trusted, governed data marketplace, enabling mastered and curated data to drive innovations that leapfrog the competition. To do this, trust assurance has become the critical enabler. But how to accomplish trust assurance? ... So, what is trust assurance, and how can data governance help accelerate it? If an organization is to convert data insights into value that drives new revenue, improves customer experience, and enables more efficient operations, the data needs controls to help ensure it is both of sufficient quality for reliable results and protected for appropriate, compliant use. According to IDC, we’re seeing a 61 percent compound annual growth rate (CAGR) in worldwide data at this moment—a rate of increase that will result in 175 zettabytes of data worldwide by 2025.


DDoS mitigation strategies needed to maintain availability during pandemic

According to Graham-Cumming, enterprises should start the process of implementing mitigating measures by conducting thorough due diligence of their entire digital estate and its associated infrastructure, because that is what attackers are doing. “The reality is, particularly for the ransomware folks, these people are figuring out what in your organisation is worth attacking,” he says. “It might not be the front door, it might not be the website of the company as that might not be worth it – it might be a critical link to a datacentre where you’ve got a critical application running, so we see people doing reconnaissance to figure out what the best thing to attack is. “Do a survey of what you’ve got exposed to the internet, and that will give you a sense of where attackers might go. Then look at what really needs to be exposed to the internet and, if it does, there are services out there that can help.” This is backed up by Goulding at Nominet, who says that while most reasonably mature companies will have already considered DDoS mitigation, those that have not can start by identifying which assets they need to maintain availability for and where they are located.
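
A very small sketch of the "survey what you've got exposed to the internet" advice, assuming Python and a hand-maintained list of hosts you own; the addresses and port list are placeholders, and dedicated attack-surface-management and scanning tools go much further.

```python
import socket

# Placeholders: only scan assets you own and are authorised to test.
HOSTS = ["203.0.113.10", "203.0.113.11"]
PORTS = [22, 80, 443, 3389]

def is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """TCP connect check: returns True if the port accepts a connection."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

for host in HOSTS:
    exposed = [p for p in PORTS if is_open(host, p)]
    print(host, "exposes", exposed or "nothing on the checked ports")
```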


Empathy: The glue we need to fix a fractured world

Our most difficult moments force us to contend with our vulnerability and our mortality, and we realize how much we need each other. We’ve seen this during the pandemic and the continued struggle for racial justice. There has been an enormous amount of suffering but also an intense desire to come together, and a lot of mutual aid and support. This painful moment has produced a lot of progress and clarity around our values. Yet modern life, especially in these pandemic times, makes it harder than ever to connect with each other, and this disconnectedness can erode our empathy. But we can fight back. We can work to empathize more effectively. The pandemic, the economic collapse associated with it, and the fight for racial justice have increased all sorts of feelings, including empathy, anger, intolerance, fear, and stress. A big question for the next two to five years is which tide will prevail. ... Another problem is that there’s tribalism within organizations, especially larger organizations and those that are trying to put different groups of people with different goals under a single tent. For instance, I’ve worked with companies that include both scientists and people who are trying to market the scientists’ work. 



Quote for the day:

"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson

Daily Tech Digest - November 21, 2020

How phishing attacks are exploiting Google's own tools and services

Armorblox's co-founder and head of engineering, Arjun Sambamoorthy, explains that Google is a ripe target for exploitation due to the free and democratized nature of many of its services. Adopted by so many legitimate users, Google's open APIs, extensible integrations, and developer-friendly tools have also been co-opted by cybercriminals looking to defraud organizations and individuals. Specifically, attackers are using Google's own services to sneak past binary security filters that look for traffic based on keywords or URLs. ... cybercriminals spoof an organization's security administration team with an email telling the recipient that they've failed to receive some vital messages because of a storage quota issue. A link in the email asks the user to verify their information in order to resume email delivery. The link in the email leads to a phony login page hosted on Firebase, Google's mobile platform for creating apps, hosting files and images, and serving up user-generated content. This link goes through one redirection before landing on the Firebase page, confusing any security product that tries to follow the URL to its final location. As it's hosted by Google, the parent URL of the page will escape the notice of most security filters.
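
The redirect trick described above is one reason URL analysis has to follow the chain all the way to its final destination. A hedged sketch, assuming Python's requests library, of how a filter might record every hop before judging the landing page (the URL is a placeholder):

```python
import requests

def redirect_chain(url: str, timeout: float = 5.0) -> list[str]:
    """Follow redirects and return every URL visited, ending with the final landing page."""
    response = requests.get(url, allow_redirects=True, timeout=timeout)
    return [hop.url for hop in response.history] + [response.url]

# Example (placeholder URL): judge the *last* entry, not just the first.
for hop in redirect_chain("https://example.com/suspicious-link"):
    print(hop)
```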


Women in Data: How Leaders Are Driving Success

Next-gen analytics have helped to shift perception and enable the business to accelerate the use of data, according to panelist Barb Latulippe, Sr. Director Enterprise Data at Edward Life Sciences, who emphasized the trend toward self-service in enterprise data management. The days of the business going to IT are gone—a data marketplace provides a better user experience. Coupled with an effort to increase data literacy throughout the enterprise, such data democratization empowers users to access the data they need themselves, thanks to a common data language. This trend was echoed by panelist Katie Meyers, senior vice president at Charles Schwab responsible for data sales and service technologies. A data leader for 25 years, Katie focused on the role cloud plays in enabling new data-driven capabilities. Katie emphasized that we’re living in a world where data grows faster than our ability to manage the infrastructure. By activating data science and artificial intelligence (AI), Charles Schwab can leverage automation and machine learning to enable both the technical and business sides of the organization to more effectively access and use data. 


Developer experience: an essential aspect of enterprise architecture

Code that provides the structure and resources to allow a developer to meet their objectives with a high degree of comfort and efficiency is indicative of a good developer experience. Code that is hard to understand, hard to use, fails to meet expectations and creates frustration for the developer is typical of a bad developer experience. Technology that offers a good developer experience allows a programmer to get up and running quickly with minimal frustration. A bad developer experience—one that is a never-ending battle trying to figure out what the code is supposed to do and then actually getting it to work—costs time, money, and, in some cases, can increase developer turnover. When working with a company’s code is torturous enough, a talented developer who has the skills to work anywhere will take one of their many other opportunities and leave. There is only so much friction users will tolerate. While providing a good developer experience is known to be essential as one gets closer to the user of a given software, many times, it gets overlooked at the architectural design level. However, this oversight is changing. Given the enormous demand for more software at faster rates of delivery, architects are paying attention.


ISP Security: Do We Expect Too Much?

"The typical Internet service provider is primarily focused on delivering reliable, predictable bandwidth to their customers," Crisler says. "They value connectivity and reliability above everything else. As such, if they need to make a trade-off decision between security and uptime, they will focus on uptime." To be fair, demand for speed and reliable connections was crushing many home ISPs in the early days of the pandemic. For some, it remains a serious strain. "In the early weeks of the pandemic, when people started using their residential connections at once, ISPs were faced with major outages as bandwidth oversubscription and increased botnet traffic created serious bottlenecks for people working at home," says Bogdan Botezatu, director of threat research and reporting at Bitdefender. ISPs' often aging and inadequately protected home hardware presents many security vulnerabilities as well. "Many home users rent network hardware from their ISP. These devices are exposed directly to the Internet but often lack basic security controls. For example, they rarely if ever receive updates and often leave services like Telnet open," says Art Sturdevant, VP of technical operations at Internet device search engine Censys. "And on devices that can be configured using a Web page, we often see self-signed certificates, a lack of TLS for login pages, and default credentials in use."


Can private data as a service unlock government data sharing?

Data as a service (DaaS), a scalable model where many analysts can access a shared data resource, is commonplace. However, privacy assurance about that data has not kept pace. Data breaches occur by the thousands each year, and insider threats to privacy are commonplace. De-identification of data can often be reversed and has little in the way of a principled security model. Data synthesis techniques can only model correlations across data attributes for unrealistically low-dimensional schemas. What is required to address the unique data privacy challenges that government agencies face is a privacy-focused service that protects data while retaining its utility to analysts: private data as a service (PDaaS). PDaaS can sit atop DaaS to protect subject privacy while retaining data utility to analysts. Some of the most compelling work to advance PDaaS can be found with projects funded by the Defense Advanced Research Projects Agency’s Brandeis Program, ... According to DARPA, “[t]he vision of the Brandeis program is to break the tension between: (a) maintaining privacy and (b) being able to tap into the huge value of data. Rather than having to balance between them, Brandeis aims to build a third option – enabling safe and predictable sharing of data in which privacy is preserved.”


How to Create High-Impact Development Teams

Today’s high-growth, high-scale organizations must have well-rounded tech teams in place -- teams that are engineered for success and longevity. However, the process of hiring for, training and building those teams requires careful planning. Tech leaders must ask themselves a series of questions throughout the process: Are we solving the right problem? Do we have the right people to solve these problems? Are we coaching and empowering our people to solve all aspects of the problem? Are we solving the problem the right way? Are we rewarding excellence? Is 1+1 at least adding up to 2 if not 3? ... When thinking of problems to solve for the customers -- don’t constrain yourself by the current resources. A poor path is to first think of solutions based on resource limitations and then find the problems that fit those solutions. An even worse path is to lose track of the problems and simply start implementing solutions because “someone” asked for it. Instead, insist on understanding the actual problems/pain points. Development teams who understand the problems often come back with alternate, and better, solutions than the initial proposed ones. 


Apstra arms SONiC support for enterprise network battles

“Apstra wants organizations to reliably deploy and operate SONiC with simplicity, which is achieved through validated automation...Apstra wants to abstract the switch OS complexity to present a consistent operational model across all switch OS options, including SONiC,” Zilakakis said. “Apstra wants to provide organizations with another enterprise switching solution to enable flexibility when making architecture and procurement decisions.” The company’s core Apstra Operating System (AOS), which supports SONiC-based network environments, was built from the ground up to support intent-based networking (IBN). Once running, it keeps a real-time repository of configuration, telemetry and validation information to constantly ensure the network is doing what the customer wants it to do. AOS includes automation features to provide consistent network and security policies for workloads across physical and virtual infrastructures. It also includes intent-based analytics to perform regular network checks to safeguard configurations. AOS is hardware agnostic and integrated to work with products from Cisco, Arista, Dell, Juniper, Microsoft and Nvidia/Cumulus.


New EU laws could erase its legacy of world-leading data protection

As the European Union finalises new digital-era laws, its legacy of world-leading privacy and data protection is at stake. Starting next week, the European Commission will kick off the introduction of landmark legislative proposals on data governance, digital market competition, and artificial intelligence. The discussions happening now and over the next few months have implications for the future of the General Data Protection Regulation and the rights this flagship law protects. With Google already (predictably) meddling in the debate, it is imperative that regulators understand what the pitfalls are and how to avoid them. ... The first new legislation out of the gate will be the Data Governance Act, which the European Commission is set to publish on November 24. According to Commissioner Thierry Breton, the new Data Strategy aims to ensure the EU “wins the battle of non-personal data” after losing the “race on personal data”. We strongly object to that narrative. While countries like the US have fostered the growth of privacy-invasive data harvesting business models that have led to repeated data breaches and scandals such as Cambridge Analytica, the EU stood against the tide, adopting strong data protection rules that put people before profits.


The journey to modern data management is paved with an inclusive edge-to-cloud Data Fabric

We want everything to be faster, and that’s what this Data Fabric approach gets for you. In the past, we’ve seen edge solutions deployed, but you weren’t processing a whole lot at the edge. You were pushing along all the data back to a central, core location -- and then doing something with that data. But we don’t have the time to do that anymore. Unless you can change the laws of physics -- last time I checked, they haven’t done that yet -- we’re bound by the speed of light for these networks. And so we need to keep as much data and systems as we can out locally at the edge. Yet we need to still take some of that information back to one central location so we can understand what’s happening across all the different locations. We still want to make the rearview reporting better globally for our business, as well as allow for more global model management. ... Typically, we see a lot of data silos still out there today with customers – and they’re getting worse. By worse, I mean they’re now all over the place between multiple cloud providers. I may use some of these cloud storage bucket systems from cloud vendor A, but I may use somebody else’s SQL databases from cloud vendor B, and those may end up having their own access methodologies and their own software development kits (SDKs).


Rebooting AI: Deep learning, meet knowledge graphs

"Most of the world's knowledge is imperfect in some way or another. But there's an enormous amount of knowledge that, say, a bright 10-year-old can just pick up for free, and we should have RDF be able to do that. Some examples are, first of all, Wikipedia, which says so much about how the world works. And if you have the kind of brain that a human does, you can read it and learn a lot from it. If you're a deep learning system, you can't get anything out of that at all, or hardly anything. Wikipedia is the stuff that's on the front of the house. On the back of the house are things like the semantic web that label web pages for other machines to use. There's all kinds of knowledge there, too. It's also being left on the floor by current approaches. The kinds of computers that we are dreaming of that can help us to, for example, put together medical literature or develop new technologies are going to have to be able to read that stuff. We're going to have to get to AI systems that can use the collective human knowledge that's expressed in language form and not just as a spreadsheet in order to really advance, in order to make the most sophisticated systems."



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley