Daily Tech Digest - August 22, 2020

There is a crisis of face recognition and policing in the US

When Jennifer Strong and I started reporting on the use of face recognition technology by police for our new podcast, “In Machines We Trust,” we knew these AI-powered systems were being adopted by cops all over the US and in other countries. But we had no idea how much was going on out of the public eye. For starters, we don’t know how often police departments in the US use facial recognition for the simple reason that in most jurisdictions, they don’t have to report when they use it to identify a suspect in a crime. The most recent numbers, from 2016, are speculative, but they suggest that at the time, at least half of Americans had photos in a face recognition system. One county in Florida ran 8,000 searches each month. We also don’t know which police departments have facial recognition technology, because it’s common for police to obscure their procurement process. There is evidence, for example, that many departments buy their technology using federal grants or nonprofit gifts, which are exempt from certain disclosure laws. In other cases, companies offer police trial periods for their software that allow officers to use systems without any official approval or oversight.


Outlook “mail issues” phishing – don’t fall for this scam!

Only if you were to dig into the email headers would it be obvious that this message actually arrived from outside and was not generated automatically by your own email system at all. The clickable link is perfectly believable, because the part we’ve redacted above (between the text https://portal and the trailing /owa, short for Outlook Web App) will be your company’s own domain name. But even though the blue text of the link itself looks like a URL, it isn’t actually the URL that you will visit if you click it. Remember that a link in a web page consists of two parts: first, the text that is highlighted, usually in blue, which is clickable; second, the destination, or HREF (short for hypertext reference), where you actually go if you click the blue text. ... One tricky problem for phishing crooks is what to do at the end, so you don’t belatedly realise it’s a scam and rush off to change your password (or cancel your credit card, or whatever it might be). In theory, they could try using the credentials you just typed in to log in for you and then dump you into your real account, but there’s a lot that could go wrong. The crooks almost certainly will test out your newly phished password pretty soon, but probably not right away while you are paying attention and might spot any anomalies that their attempted login might cause.
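
The two-part anatomy described here is easy to demonstrate. Below is a minimal browser-console sketch of a deceptive link; the domain names are invented for illustration, with the visible text mimicking a company OWA portal while the HREF points somewhere else entirely.

```ts
// The visible text of a link and its destination are independent.
const link = document.createElement("a");
link.textContent = "https://portal.example-company.com/owa"; // what the reader sees
link.href = "https://phishing-site.invalid/fake-owa";        // where a click actually goes
document.body.appendChild(link);

// Inspecting the element exposes the mismatch a casual glance misses:
console.log(link.textContent); // "https://portal.example-company.com/owa"
console.log(link.href);        // "https://phishing-site.invalid/fake-owa"
```

Hovering over a link to check the real destination in the status bar, rather than trusting the blue text, defeats exactly this trick.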


Taking on the perfect storm in cybersecurity

The future of cybersecurity depends on a platform approach. This will allow your cybersecurity teams to focus on security rather than continuing to integrate solutions from many different vendors. It allows you to keep up with digital transformation and, along the way, battle the perfect storm. Our network perimeters are typically well-protected, and organizations have the tools and technologies in place to identify threats and react to them in real-time within their network environments. The cloud, however, is a completely different story. There is no established model for cloud security. The good news is that there is no big deployment of legacy security solutions in the cloud. This means organizations have a chance to get it right this time. We can also rethink how we access the cloud and manage security operations centers (SOCs) to maximize ML and AI for prevention, detection, response and recovery. Cloud security, cloud access and next-generation SOCs are interrelated. Individually and together, they present an opportunity to modernize cybersecurity. If we build the right foundation today, we can break the pattern of too many disparate tools and create a path to consuming cybersecurity innovations and solutions more easily in the future.


FBI and CISA warn of major wave of vishing attacks targeting teleworkers

Collected information included: name, home address, personal cell/phone number, the position at the company, and duration at the company, according to the two agencies. The attackers then called employees using random Voice-over-IP (VoIP) phone numbers or by spoofing the phone numbers of other company employees. "The actors used social engineering techniques and, in some cases, posed as members of the victim company's IT help desk, using their knowledge of the employee's personally identifiable information—including name, position, duration at company, and home address—to gain the trust of the targeted employee," the joint alert reads. "The actors then convinced the targeted employee that a new VPN link would be sent and required their login, including any 2FA or OTP." When the victim accessed the link to the phishing site the hackers had created, the cybercriminals logged the credentials and used them in real time to gain access to the corporate account, even bypassing 2FA/OTP protections with the help of the employee. "The actors then used the employee access to conduct further research on victims, and/or to fraudulently obtain funds using varying methods dependent on the platform being accessed," the FBI and CISA said.


Why you need to revisit your IT policies

Part of that proactive planning should be adjustments to your IT policies. These documents are often forgotten until they're most needed, and the recent rushed transition from office work to remote work likely highlighted this condition. In the rushed transition, imagine how helpful it would have been to have some basic policy guidance on what equipment is supported for remote work, what items are reimbursable and where they can be sourced, and which software was recommended. If nothing else, some simple policies and guidance around these topics probably would have saved your already-stretched support staff dozens of phone calls and emails. ... At their best, policies provide guidance based on organizational priorities and experience; at their worst, they are an extensive list of "Thou Shalt Nots" that assume your colleagues are nefarious scallywags one step away from destroying the organization should you not be there to preempt each of their misguided notions. Many employees dislike policy documents since they skew toward the latter, and unsurprisingly, when you treat your colleagues like children and scoundrels, they'll rise to the occasion.


Styles, protocols and methods of microservices communication

For those who choose to stick with asynchronous protocols, consider exploring the Advanced Message Queuing Protocol (AMQP). This widely available and mature protocol provides a standard method for microservices communication and should be a priority for those developing truly composite microservices apps. Asynchronous protocols like AMQP use a lightweight service bus similar to a service-oriented architecture (SOA) bus, though much less complex. Unlike HTTP, this bus provides a message broker that acts as an intermediary between the individual microservices, thus avoiding the problems associated with a brokerless approach. Keep in mind, however, that a message broker will introduce extra steps that can add latency. The individual services still contain their functional and operational logic, and will need time to process that logic. The bus simply helps standardize and throttle those communications. Major cloud platforms, such as Azure, provide their own proprietary service bus for message brokering. However, there are also third-party options such as RabbitMQ, an open source message broker written in the Erlang programming language.
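
As a concrete illustration, here is a minimal sketch of broker-mediated messaging using the amqplib Node client against a RabbitMQ broker assumed to be running locally; the queue name and payload are invented. The producer never addresses the consumer directly: both sides only know the broker.

```ts
import * as amqp from "amqplib";

async function main() {
  const conn = await amqp.connect("amqp://localhost"); // the broker
  const ch = await conn.createChannel();
  await ch.assertQueue("orders", { durable: true });   // broker-managed queue

  // Producer service: publish and move on; no knowledge of who consumes.
  ch.sendToQueue("orders", Buffer.from(JSON.stringify({ orderId: 42 })), {
    persistent: true,
  });

  // Consumer service: the broker delivers messages as they arrive.
  await ch.consume("orders", (msg) => {
    if (msg) {
      console.log("received:", msg.content.toString());
      ch.ack(msg); // acknowledge so the broker can discard the message
    }
  });
}

main().catch(console.error);
```

The broker hop is the "extra step" the excerpt warns about: it buys decoupling and back-pressure at the cost of a little latency.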


Edge computing: 4 problems it helps solve for enterprises

Enterprises in the construction, manufacturing, mining, and oil and gas industries, for example, are embracing the edge, which enables them to run the core elements of any solution locally by empowering local devices to save their state, interact with each other, and send important alerts and notifications. “This means that even if the internet goes down at the factory, warehouse, construction site, mine, or field, edge processing continues to work full steam ahead,” Allsbrook says. ... Edge computing can minimize the network and bandwidth issues associated with moving large amounts of data to or from IoT devices and reduce reliance on the network. Companies look to edge solutions that can process data at the source and provide summary information on what’s going on. This eliminates the need for expensive SIM cards, data plans, and other network costs that would otherwise be incurred in transporting the data from the device over a network. “Edges can use simple ‘if-then’ logic or advanced AI algorithms to understand and build those summary reports,” explains Allsbrook of ClearBlade.
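
The "summary at the source" idea is simple to sketch. In the hypothetical snippet below, an edge node keeps raw sensor readings local and ships upstream only a compact summary plus an if-then alert; the threshold and field names are invented.

```ts
interface Reading { sensorId: string; tempC: number; at: number; }

// Runs on the edge device: raw readings never leave the site.
function summarize(readings: Reading[]) {
  const temps = readings.map((r) => r.tempC);
  const max = Math.max(...temps);
  const avg = temps.reduce((a, b) => a + b, 0) / temps.length;
  return {
    count: readings.length,
    avgTempC: Number(avg.toFixed(1)),
    maxTempC: max,
    alert: max > 90 ? "overheat" : null, // simple if-then edge logic
  };
}

// Thousands of raw readings stay local; one small object goes upstream,
// which is what trims the SIM-card, data-plan, and bandwidth costs.
```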


The Great Reset requires FinTechs – and FinTechs require a common approach to cybersecurity

Established financial services providers have a number of frameworks, standards and industry-driven initiatives available to test the security of FinTechs and other third parties. However, the volume of industry initiatives – driven by the pace of technological change and the multiplication of regulations – is now creating “noise”. This makes it difficult for FinTechs to direct their resources in a way that allows for security while also facilitating commercial partnerships. Requirements placed on FinTechs sow confusion, increase costs and may incentivise “security through obscurity”, in which less well-resourced firms play a game of chance, betting that they’re too small to be targeted by attackers and setting themselves up for problems in the future. ... The sector needs a mutually understood and widely accepted base level of cybersecurity controls. Clarity at the base level of security will support effective protection of business and client assets across the wider supply chain. This can accelerate the speed at which FinTechs can come to market and create commercial partnerships – and, in turn, incentivise good cyber hygiene.


IBM Finds Flaw in Millions of Thales Wireless IoT Modules

The modules, which IBM describes as mini circuit boards, enable 3G or 4G connectivity, but also store secrets such as passwords, credentials and code, according to Adam Laurie, X-Force Red's lead hardware hacker, and Grzegorz Wypych, senior security consultant, who wrote a blog post. "This vulnerability could enable attackers to compromise millions of devices and access the networks or VPNs supporting those devices by pivoting onto the provider's backend network," Laurie and Wypych write. "In turn, intellectual property, credentials, passwords and encryption keys could all be readily available to an attacker." In a statement, Thales says "it takes the security of its products very seriously and therefore has, after communicating and discussing this issue with affected customers, delivered software fixes in Q1/2020." The modules run microprocessors with an embedded Java ME interpreter and use flash storage. Also, there are Java "midlets" that allow for customization. One of those midlets copies custom Java code added by an OEM to a secure part of the flash memory, which should only be in write mode so that code can be written there but not read back.


How to manage unstructured data using an ECM system

Structured data is information governed by a database structure, organized into defined fields, usually within the context of a relational database. The database structure requires that data in the fields follow a prescribed format. For example, a date must have the format of a date and a name must be limited in length. The most common place that people encounter structured data is in the cells of a spreadsheet. Structured data has many applications within businesses and is easy to search. It is found in finance, customer relationship management, supply chain and other applications where compliance with structure is key to business tasks. Unstructured data, on the other hand, is data without rules and is not as searchable. Users who create unstructured data are writing free-form, rather than complying with structured data fields. There is minimal enforcement of any rules on the length of content, the format of the content or what content goes where. Despite the lack of formal structure, unstructured information -- which users create in word processing programs, spreadsheets, presentation files, PDFs, social media feeds, and audio and video files -- forms the bulk of the data created in an organization.
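
The contrast is easy to make concrete. A rough sketch, with invented field names: the structured record must satisfy format rules before it is accepted, while the free-form note carries no rules at all.

```ts
// Structured: defined fields with prescribed formats.
interface CustomerRecord {
  name: string;       // must be non-empty and limited in length
  signupDate: string; // must parse as a date
}

function validate(rec: CustomerRecord): string[] {
  const errors: string[] = [];
  if (rec.name.length === 0 || rec.name.length > 50) errors.push("bad name length");
  if (Number.isNaN(Date.parse(rec.signupDate))) errors.push("bad date format");
  return errors;
}

console.log(validate({ name: "Ada Lovelace", signupDate: "2020-08-22" })); // []
console.log(validate({ name: "", signupDate: "sometime last week" }));     // both errors

// Unstructured: free-form text with no fields or format to enforce,
// and correspondingly harder to search with precision.
const memo = "Called the customer Tuesday; she asked about renewal pricing.";
```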



Quote for the day:

"When you expect the best from people, you will often see more in them than they see in themselves." -- Mark Miller

Daily Tech Digest - August 21, 2020

How healthcare IT can be kept smart

As with many industries, the healthcare sector has seen a rapid phase of digitalisation, with new connected medical devices intertwining patient treatment with IT infrastructure that was traditionally separate from day-to-day healthcare practice. There can be no doubt this has boosted efficiencies and had a positive impact on patient care. However, digitalisation comes with a catch. With so many new connected devices, today’s hospital IT networks have more potential points of failure than ever before. As with any information system, the storage and transfer of data is at the heart of all healthcare IT systems. Most, if not all, medical IoT devices rely on data and information being readily available through various points in the hospital network. For example, a radiologist will routinely require access to patient imaging records in order to review scans that have been automatically uploaded to the system by an MRI machine. To facilitate this degree of connectivity, most hospitals have what is called an integration engine. This is a central IT communications hub that securely stores and distributes information and data where and when it is needed. Think of the integration engine as the hospital’s central nervous system, facilitating all communications across the network.


Why Innovation Takes More Than Genius

It’s easy to look at someone like Steve Jobs or Elon Musk and imagine that their success was inevitable. Their accomplishments are so out of the ordinary that it just seems impossible that they could have ever been anything other than successful. You get the sense that whatever obstacles they encountered, they would overcome. Yet it isn’t that hard to imagine a different path. If, for example, Jobs had remained in Homs, Syria, where he was conceived, it’s hard to see how he would have ever been able to become a technology entrepreneur at all, much less a global icon. If apartheid had never ended, Musk’s path to Silicon Valley would have been much less likely as well. The truth is that genius can be exceptionally fragile. Making a breakthrough takes more than talent. It requires a mixture of talent, luck and an ecosystem of support to mold an idea into something transformative. In fact, in my research of great innovators what’s amazed me the most is how often they almost drifted into obscurity. Who knows how many we have lost? On a January morning in 1913, the eminent mathematician G.H. Hardy opened his mail to find a letter written in almost indecipherable scrawl from a destitute young man in India named Srinivasa Ramanujan.


Systems integrators are evolving from tech experts to business strategists

Nigel Fenwick, vice president and principal analyst at Forrester, said that systems integrators (SIs) have been investing in emerging technologies and developing software to accelerate time to value for clients.  "There's demand in IT transformations for SIs and service providers to help clients architect their technology so that the business can evolve with new technologies even faster," he said. "Modern system architectures make it easier for services firms to connect systems through APIs and microservices than it used to be." Adya shared a project Infosys completed with a large retailer as one example of this orchestration approach. The client wanted to solve an employee experience problem focused on accessing personal data such as salary information, leave time, and bonus information. Each type of information lived in its own silo, requiring multiple log-ins and creating an unpleasant experience. Infosys combined multiple data sets into a single interface that employees and temp workers access by typing in an employee number.  "This solved an experience problem that involved integrating the back end and the front end and building a platform," he said.


A Robust Cybersecurity Policy is Need of the Hour: Experts

“There has been a recent surge in cyberattacks on Indian digitalscape that are only increasing in scope and sophistication, targeting sensitive personal and business data and critical information infrastructure, with an impact on national economy and security. ... And while formulation and adoption of policies might still take time, this is a clarion call to the Indian internet users to pay attention to the threats, on creating robust ‘firewalls’, and conducting regular cybersecurity and data protection audits.” – Nikhil Korgaonkar, regional director, India and SAARC, Arcserve “With cyberattacks increasingly becoming sophisticated, cybersecurity and digitization cannot and should not exist in silos. What we need now is a robust cybersecurity roadmap that will address the gaps and provide us a strong cyber-armor. Covid-19 situation has only accelerated the pace of digitization, potentially amplifying these security concerns. It is time for businesses to take advantage of approaches like micro-segmentation, encryption and dynamic isolation, enhanced by the power of emerging technologies like AI and ML to up their cybersecurity game.” – Sumed Marwaha, regional services vice president and managing director, Unisys India


3 Huge Ways Companies Are Delighting Customers With Artificial-Intelligence-Driven Services

Driven by the likes of Netflix, this notion of customization and personalization is a major business trend. If your customers don’t already expect a more intelligent, personalized service offering, they soon will do. If you aren’t able to offer such a service, rest assured your competitors will. (And, increasingly, that competition may come from the tech sector itself. Consider the rise of personal finance apps that are seriously challenging traditional banking service providers.) We tend to think of retail as a product-based industry, but in fact, it perfectly illustrates this move towards more personalized services. Amazon was an early pioneer of data-driven, personalized shopping recommendations, but now a wave of new services has sprung up to offer a similarly tailored approach for consumers. Stitch Fix, which delivers hand-picked clothing to your door, is a great example. With Stitch Fix, you detail your size, style preferences, and lifestyle in a questionnaire. Then, using AI, the system pre-selects clothes that will fit and suit you, and a (human) personal stylist chooses the best options from that pre-selected list. And voila, the perfect clothes for you arrive at your door every month. 


Easy Interpretation of a Logistic Regression Model with Delta-p Statistics

Imagine a situation where a customer applies for credit, the bank collects data about the customer - demographics, existing funds, and so on - and predicts the credit-worthiness of the customer with a machine learning model. The customer’s credit application is rejected, but the banker doesn’t know why exactly. Or a bank wants to advertise its credit products, and the target group should be those who could eventually get credit. But who are they? In these kinds of situations, we would prefer a model that is easy to interpret, such as the logistic regression model. The Delta-p statistic makes the interpretation of the coefficients even easier. With the Delta-p statistic at hand, the banker doesn’t need a data scientist to be able to inform the customer, for example, that the credit application was rejected because all applicants who apply for credit for education purposes have a very low chance of getting credit. The decision is justified, the customer is not personally hurt, and he or she might come back in a few years to apply for a mortgage.
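
In one common formulation, Delta-p starts from a baseline probability p0, shifts the log-odds by the fitted logistic coefficient b, and reports the resulting change in probability. A small sketch, with an invented baseline and coefficient:

```ts
const logit = (p: number) => Math.log(p / (1 - p));
const sigmoid = (z: number) => 1 / (1 + Math.exp(-z));

// Change in predicted probability when the predictor's log-odds
// contribution b is applied on top of the baseline probability p0.
function deltaP(p0: number, b: number): number {
  return sigmoid(logit(p0) + b) - p0;
}

// E.g. a 40% baseline approval rate and a coefficient of -1.2 on an
// "education purpose" flag: the flag cuts the predicted chance of
// approval by roughly 23 percentage points.
console.log(deltaP(0.4, -1.2).toFixed(3)); // "-0.233"
```

That percentage-point reading is what lets the banker phrase the rejection in plain terms, without translating raw log-odds coefficients.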


Shifting Left: The Evolving Role of Automation in DevOps Tools

Advanced automation tools eliminate the manual and time-consuming configuration per project within DevOps, thereby removing the friction between developers and DevOps teams when needing to add scanning steps into the jobs of all CI pipelines. Adding jobs or steps to scan code is challenging using the traditional CI-scan model. Advanced automation tools ultimately break down barriers between teams and allow them to play better together and achieve true DevSecOps integration. At the end of the day, shifting left and automating your CI/CD pipeline will dramatically improve the integration of security within the SDLC. Organizations can instantly onboard their development, security, and operations teams and simplify the governance of their security policies and DevSecOps processes. The traditional AST solution providers are leaving developers behind because without the ability to scan source code directly in your environment, you’re left having to manually process scans — leaving a lot of room for error and adding a lot of time to your end-delivery date. If I can leave you with one thing, it’s that integration is key to automation and the tools you use should enable the most shift-left approach possible, where automation can occur within the SDLC — changing the way AST solutions are embedded within all DevOps environments.


GPT-3 Is an Amazing Research Tool. But OpenAI Isn’t Sharing the Code.

At its heart, GPT-3 is an incredibly powerful tool for writing in the English language. The most important thing about GPT-3 is its size. GPT-3 learned to produce writing by analyzing 45 terabytes of data, and that training process reportedly cost millions of dollars in cloud computing. It has seen human writing in billions of combinations. This is a key part of OpenAI’s long-term strategy. The firm has been saying for years that when it comes to deep learning algorithms, the bigger the better. More data and more computing power make a more capable algorithm. For instance, when OpenAI crushed professional esports players at Dota 2, it was due to its ability to train algorithms on hundreds of GPUs at the same time. It’s something OpenAI leaders have told me previously: Jack Clark, policy director for OpenAI, said that the bigger the algorithm, the “more coherent, more creative, and more reliable” it is. When talking about the amount of training the Dota 2 bots needed, CTO Greg Brockman said, “We just kept waiting for the magic to run out. We kept waiting to hit a wall, and we never seemed to hit a wall.” A similar approach was taken for GPT-3. 


Indian leaders say upskilling key cybersecurity challenge: Microsoft

The pandemic had direct implications on cybersecurity budgets and staffing, with 33 per cent of business leaders in India reporting a 25 per cent budget increase for security. More than half (54 per cent) of the leaders in the country said that they would hire additional security professionals in their security team. “A vast majority (70 per cent) of leaders in India stated that they plan to speed up deployment of Zero Trust capabilities to reduce risk exposure,” the findings showed. Globally, 90 per cent of businesses have been impacted by phishing attacks, with 28 per cent admitting to being successfully phished. Notably, successful phishing attacks were reported in significantly higher numbers from organizations that described their resources as mostly on-premise (36 per cent) as opposed to being more cloud-based. In response to Covid-19, more than 80 per cent of companies added security jobs. While 58 per cent of companies reported an increase in security budgets globally, 65 per cent reported an increase in compliance budgets. “The shift to remote work is fundamentally changing security architecture,” said the survey.


How to manage your edge infrastructure and devices

“Firstly, lack of external network connectivity to a device making it necessary to process data at the edge. Typically, this has been due to difficult environments or security requirements. Secondly, a need for speed that prevents sending data through a network due to latency, where moving the data costs more in terms of time than having the processing power of a data centre or the cloud available. “This is absolutely true for certain use cases. On the factory floor, for example, there is a desire to prevent network connectivity from bringing an entire plant down. In fact, in many factories, the level of bandwidth currently available can often be too low to have all equipment sending data back to the data centre. In this case, it is critical to place analytics tools at the edge with no disruption, sitting the algorithm next to the hardware. “However, for businesses with non-critical use cases, this is changing. Over time, the drivers behind the need for edge analytics have changed as network speed and connectivity become faster and more prevalent. As such, the roundtrip of data to the network – which is going faster every day – will not hinder digital progress and thus businesses are increasingly happy to manage infrastructure and devices in this way.”



Quote for the day:

“I’m convinced that about half of what separates successful entrepreneurs from non-successful ones is pure perseverance.” -- Steve Jobs

Daily Tech Digest - August 20, 2020

11 penetration testing tools the pros use

Formerly known as BackTrack Linux and maintained by the good folks at Offensive Security (OffSec, the same folks who run the OSCP certification), Kali is optimized in every way for offensive use as a penetration tester. While you can run Kali on its own hardware, it's far more common to see pentesters using Kali virtual machines on OS X or Windows. Kali ships with most of the tools mentioned here and is the default pentesting operating system for most use cases. Be warned, though--Kali is optimized for offense, not defense, and is easily exploited in turn. Don't keep your super-duper extra secret files in your Kali VM. ... Why exploit when you can meta-sploit? This appropriately named meta-software is like a crossbow: Aim at your target, pick your exploit, select a payload, and fire. Indispensable for most pentesters, metasploit automates vast amounts of previously tedious effort and is truly "the world's most used penetration testing framework," as its website trumpets. An open-source project with commercial support from Rapid7, Metasploit is a must-have for defenders to secure their systems from attackers.


The Role of Business Analysts in Agile

A few things that we as BA Managers need to be aware of include:

Understanding of the role - because of a BA’s ability to be a flexible, helpful and an overall "fill-in-the-gaps" person, the role of the BA gets blurrier and blurrier. This is what makes it interesting and also so great when it comes to working within an agile team. Ultimately it also makes it complicated to explain to others, especially those unfamiliar with the role. If it is complicated to explain, it is easy for people to underestimate the value it brings so make sure you are clear in your "pitch" of what your BAs do!

Being pigeonholed into the role - if you are a great BA, nobody wants to lose you so they will continue giving you BA work even if you want to go into something else like project management. It is key for those managing BAs to actively support their career aspirations even if they are outside of the discipline, and to lobby on their behalf.

Hitting an analysis complexity "ceiling" - if you are constantly with your team and helping them solve delivery problems, it is very hard to dedicate focused analysis time on upcoming large initiatives.


Cisco bug warning: Critical static password flaw in network appliances needs patching

The flaws reside in the Cisco Discovery Protocol, a Layer 2 or data link layer protocol in the Open Systems Interconnection (OSI) networking model. "An attacker could exploit these vulnerabilities by sending a malicious Cisco Discovery Protocol packet to the targeted IP camera," explains Cisco in the advisory for the flaws CVE-2020-3506 and CVE-2020-3507. "A successful exploit could allow the attacker to execute code on the affected IP camera or cause it to reload unexpectedly, resulting in a denial-of-service (DoS) condition." The Cisco cameras are vulnerable if they are running a firmware version earlier than 1.0.9-4 and have the Cisco Discovery Protocol enabled. Again, customers need to apply Cisco's update to protect the model because there's no workaround. This bug was reported to Cisco by Qian Chen of Qihoo 360 Nirvan Team. However, Cisco notes it is not aware of any malicious activity using this vulnerability.  The second high-severity advisory concerns a privilege-escalation flaw affecting the Cisco Smart Software Manager On-Prem or SSM On-Prem. It's tracked as CVE-2020-3443 and has a severity score of 8.8 out of 10.


Fuzzing Services Help Push Technology into DevOps Pipeline

"Fuzzing by its very nature is this idea of automated continuous testing," he says. "There is not a lot of human input that is necessary to gain the benefits of fuzz testing in your environment. It's a good fit from the idea of automation and continuous testing, along with this idea of continuous development." Many companies are aiming to create agile software development processes, such as DevOps. Because this change often takes many iterative cycles, advanced testing methods are not usually given high priority. Fuzz testing, the automated process of submitting randomized or crafted inputs into the application, is one of these more complex techniques. Even within the pantheon of security technologies, fuzzing is often among the last adopted. Yet, 2020 may be the year that changes. Major providers and even frameworks have focused on making fuzzing easier, says David Haynes, a product security engineer at Cloudflare. "I think we are just getting started in terms of seeing fuzzing becoming a bit more mainstream, because the biggest factor hindering (its adoption) was available tooling," he says. "People accept that integration testing is needed, unit testing is needed, end-to-end testing is needed, and now, that fuzz testing is needed."


Why We Need Lens as a Kubernetes IDE

The current version of Lens vastly improves quality of life for developers and operators managing multiple clusters. It installs on Linux, Mac or Windows desktops, and lets you switch from cluster to cluster with a single click, providing metrics, organizing and exposing the state of everything running in the cluster, and letting you edit and apply changes quickly and with assurance. Lens can hide all the ephemeral complexity of setting up cluster access. It lets you add clusters manually by browsing to their kubeconfigs, and can automatically discover kubeconfig files on your local machine. You can manage local or remote clusters of virtually any flavor, on any infrastructure or cloud. You can also organize clusters into workgroups any way you like and interact with these subsets. This capability is great for DevOps and SREs managing dozens or hundreds of clusters or just helping to manage cluster sprawl. Lens installs whatever version of kubectl is required to manage each cluster, eliminating the need to manage multiple versions directly. It works entirely within the constraints each cluster’s role-based access control (RBAC) imposes on identity, so Lens users (and teams of users) can see and interact only with permitted resources.


Computer scientists create benchmarks to advance quantum computer performance

The computer scientists created a family of benchmark quantum circuits with known optimal depths or sizes. In computer design, the smaller the circuit depth, the faster a computation can be completed. Smaller circuits also imply more computation can be packed into the existing quantum computer. Quantum computer designers could use these benchmarks to improve design tools that could then find the best circuit design. “We believe in the ‘measure, then improve’ methodology,” said lead researcher Jason Cong, a Distinguished Chancellor’s Professor of Computer Science at UCLA Samueli School of Engineering. “Now that we have revealed the large optimality gap, we are on the way to develop better quantum compilation tools, and we hope the entire quantum research community will as well.” Cong and graduate student Daniel (Bochen) Tan tested their benchmarks in four of the most used quantum compilation tools. Tan and Cong have made the benchmarks, named QUEKO, open source and available on the software repository GitHub.


Starting strong when building your microservices team

We’re used to hearing the slogan ‘Go big or go home’, but businesses would do well to think small when developing microservices. Here, developing manageable and reusable components will enable companies, partners and customers to use individual microservices across an entire landscape of applications and industries. In doing so, businesses aren’t restricting themselves to siloed applications. In addition, driving success with microservices involves considerable planning to ensure that nothing is left out. After all, microservices-based architecture consists of many moving parts and so developers should be mindful to guarantee service interactions are seamless from start to finish. The pandemic has shone a spotlight on the role of digital transformation in building up crisis resilience. Consequently, businesses are turning en masse to digital and the market is evolving apace. However, as operational and business models shift, companies must be mindful to avoid becoming locked-in to cloud vendor technologies and platforms in such a rapidly changing market. When working with a cloud partner, implementing their platform and other solutions shouldn’t be a given – while such tools will likely work fine in their own cloud environment, companies should be wary of how they will operate elsewhere.


From Legacy to Intelligent ERP: A Blueprint for Digital Transformation

Today’s ERP configuration is for running today’s business. Most run in the data center and capture, manage, and report on all core business transactions. Tomorrow’s intelligent ERP goes far beyond this charter. If you want to be part of the team transforming the business, then you should understand the vision of where the company is targeting growth over the next several years. What markets, products, and services are the priorities? What operations need to scale? What improvements in workflows can free up cash or make financial forecasting more reliable? How can you empower employees, teams, and departments to work efficiently, safely, and effectively as some people return to the office and others work remotely? Intelligent ERPs not only centralize operational workflows and data from sales, marketing, finance, and operations. These ERPs also extend data capture, workflow, and analytics around prospects and customers and their experiences interacting with the business. When fully implemented, they enable a full 360-degree view of the customer across all areas of the company that interface with them from marketing to sales, through digital commerce, and from any customer support activities.


Researchers improve perception of robots with new hearing capabilities

Working out of the Robotics Institute at Carnegie Mellon University, Pinto, as well as fellow researchers Dhiraj Gandhi and Abhinav Gupta, presented their findings during the virtual Robotics: Science and Systems conference last month. The three started the project last June, according to a release from the university. "We present three key contributions in this paper: (a) we create the largest sound-action-vision robotics dataset; (b) we demonstrate that we can perform fine grained object recognition using only sound; and (c) we show that sound is indicative of action, both for post-interaction prediction, and pre-interaction forward modeling," they write in the study. "In some domains like forward model learning, we show that sound in fact provides more information than can be obtained from visual information alone." In the published study, the three researchers said sounds did help a robot differentiate between objects and predict the physical properties of new objects. They also found that hearing helped robots determine what type of action caused a particular sound. Robots using sound capabilities were able to successfully classify objects 76% of the time, according to Pinto and the study.


Running Axon Server in Docker and Kubernetes

“Breaking down the monolith” is the new motto, as the message finally gets driven home that gluttony is also a sin in application land. If we want to be able to change in step with our market, we need to increase our deployment speed, and just tacking on small incremental changes has proven to be a losing game. No, we need to reduce interdependencies, which ultimately also means we need to accept that too much intelligence in the interconnection layer worsens the problem rather than solving it, as it sprinkles business logic all over the architecture and keeps creating new dependencies. Martin Fowler phrased it as “Smart endpoints and dumb pipes”, and as we do this, we increase application components’ autonomy and you’ll notice the individual pieces can finally start to shrink. Microservices architecture is a consequence of an increasing drive towards business agility, and woe to those who try to reverse that relationship. Imposing Netflix’s architecture on your organization to kick-start a drive for Agile development can easily destroy your business.



Quote for the day:

"Leadership is like beauty; it's hard to define, but you know it when you see it." -- Warren Bennis

Daily Tech Digest - August 19, 2020

Why Board Directors And CEOs Must Become AI Literate To Lead Forward

Unfortunately, many companies have been lured into AI programs with black box AI practices, meaning clear accountabilities are not easily evident or transparent, let alone audited to manage risk. Board directors and CEOs know where their employees are located, whether they are working remotely or in an office, and who to contact for customer service or personal issues. Yet, I don’t know of one global company where a board director or a CEO can produce, in less than five minutes, a comprehensive list of all their AI algo/AI model assets across their enterprise operations, know the last model revision date, and have robust risk classification evidence, verified by third-party auditors. With the democratization of data, which is the foundation of AI enablement, AI and machine learning (ML) KPIs must be elevated to an importance on par with our financial KPIs, driving increased transparency, just as auditors have been disciplined with fiduciary accountability for profit and loss statements.... Few companies have mature AI centers of excellence where machine learning operations (MLOps) is a competency center, although many companies are now starting to invest in MLOps.


Look Upstream to Solve Your Team's Reliability Issues

Dan believes that one of the most important steps in upstream thinking isn’t system-related. It’s human. As people will be the ones solving these issues, we are the first piece of the puzzle, and the most crucial. There’s a way to do this well. Dan notes that you should try to “...surround the problem with the right people; give them early notice of that problem, and align their efforts toward preventing specific instances of that problem”. For example, you might be bogged down with incidents and unable to tackle the action items stemming from incident retrospectives and operational reviews. These action items sit in the backlog and are not planned for any sprints. To change this, you’ll need to get buy-in from many stakeholders. You’ll need engineers, managers, product teams, and the VP of engineering on board. “Once you’ve surrounded the problem, then you need to organize all those people’s efforts. And you need an aim that’s compelling and important — a shared goal that keeps them contributing even in stressful situations,” Dan says. Once your team is ready to embark on this journey upstream, you’ll need to work on actually changing the system.



Speed up your home office: How to optimize your network for remote work and learning

Until recently, home internet providers have rarely spent much time discussing the upload bandwidth they allocate to each customer. ... Working from home, getting that upload bandwidth has been problematic. The various broadband reps I've spoken to over the years have told me that very few people ever even ask about upload bandwidth, which is why ISPs have never offered much capacity. Of course, because of COVID-19, all that is changing rapidly. Before COVID, most users were surfing the web, watching YouTube or Netflix, or playing games. Little upload capacity was needed. Now everybody's on Zoom all the time. When you're on Zoom, you need broadband capacity to send video upstream just as much as you need broadband capacity to watch video. ... the really big issue you should be concerned about is upload capacity when it comes to online learning and work-based video conferencing. I know families of six where the two adults and four kids all used to go to either the office or school -- and who are now all at home, and who all need to be in Zoom conferences at the same time. It doesn't matter that you have 100Mbps down, according to your plan, if all you have is 5Mbps up. With 5Mbps up, you can -- barely -- sustain one Zoom stream.
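
The arithmetic behind that scenario is worth spelling out. A back-of-the-envelope sketch, where the ~3 Mbps upstream per HD Zoom stream is an assumed figure for illustration (actual usage varies with resolution and settings):

```ts
const upstreamPlanMbps = 5;  // what the 100 Mbps-down plan gives you up
const perStreamMbps = 3;     // assumed upstream per HD video call
const concurrentCalls = 6;   // two adults + four kids, all on Zoom

const neededMbps = perStreamMbps * concurrentCalls;
console.log(`need ~${neededMbps} Mbps up, plan provides ${upstreamPlanMbps}`);
// => need ~18 Mbps up, plan provides 5; the downstream number never matters
```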


Exclusive: 5 principles of creative disruption

Whatever line of innovation you’re in, there’s a tip that comes in handy, time and time again: sell the problem you solve, not the product. Think of your most-used mobile app (Google Maps, WhatsApp, Tinder perhaps) and, odds are, it transformed something that you found to be boring, awkward or time-consuming into a much better experience. For ActiveQuote, the idea was sparked by an irritating problem. There was nothing in health insurance that compared to what consumers were able to do with car insurance. “Deep frustration can become an entrepreneur’s inspiration,” says Theo. ... Ten years from now, no insurtech wants to be described as ‘very 2020s’. Keeping the customer journey fresh is just as important as maintaining the quality of the product. It’s something that ActiveQuote is keen to stay on top of. “A key challenge to address with any kind of online quote or application journey is customer drop-off,” says Jones. “Traditional, form-based application pages are fine for desktops. However, with the increase in mobile usage, a mobile-specific journey was imperative, especially with complex products such as health insurance.”


FinTech Leaders Should Embrace This Two-Pronged Strategy to Survive the Pandemic

More broadly, FinTech leaders can identify fertile ground by asking, “What processes can my company automate within the financial services industry that previously were managed by humans?” Other opportunities for innovation uncovered by the pandemic include technology solutions pertaining to safety, fraud, and remote banking and payments. The majority of the technological changes spurred by the pandemic will not be reversed, even after the virus is long-gone. Therefore, FinTech leaders should think beyond solutions that are just stop-gaps. The second prong of your survival strategy should be adopting a business model that will generate predictable revenue over the long-term. Rather than depending exclusively on one-time sales, delivery execution and post-support, financial technology providers should be positioning for the coveted, ever-present and predictable recurring revenue model. Subscription-based, service bureau, and software or platform-as-a-service offerings are clearly leading against capital expenditures and premise-based investments. Rapidly proliferating, these types of solutions are driving the technological transition of a wide spectrum of enterprise-level B2B and B2C organizations. 


Getting Started - AI Image Classification With TensorFlow.js

TensorFlow.js offers surprisingly good performance because it uses WebGL (a JavaScript graphics API) and thus is hardware-accelerated. For even better performance, you can use tfjs-node (the Node.js version of TensorFlow). TensorFlow.js allows you to load pre-trained models right in your browser. If you have trained a TensorFlow or Keras model offline in Python, you can save it to a location available to the web and load it into the browser for inference. You can also use different libraries to include features such as image classification and pose detection without having to train your model from scratch. We will see an example of such a scenario later in the series. TensorFlow.js also allows you to use transfer learning to elevate an existing model using a small amount of data collected in the browser. This allows us to train an accurate model quickly and efficiently. We will also see an example of transfer learning in action later in the series. At the most basic level, you can use TensorFlow.js to define, train, and run models entirely in the browser. As mentioned earlier, using TensorFlow.js means that you can create and run AI models in a static HTML document with no installation required. At the time of writing, ...
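
To make the load-and-infer flow concrete, here is a minimal browser-side sketch. The model URL and the 224x224 input shape are placeholders; substitute whatever your exported model actually expects.

```ts
import * as tf from "@tensorflow/tfjs";

async function classify(img: HTMLImageElement) {
  // Load a model trained offline (e.g. in Keras) and saved to the web.
  const model = await tf.loadLayersModel("https://example.com/model/model.json");

  const pixels = tf.browser.fromPixels(img);                   // image -> tensor
  const resized = tf.image.resizeBilinear(pixels, [224, 224]); // match model input
  const input = resized.toFloat().div(255).expandDims(0);      // normalize, add batch dim

  const scores = model.predict(input) as tf.Tensor;
  console.log(await scores.data()); // per-class scores
}
```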


Digital Strategy In A Time Of Crisis

The priority is to protect employees and ensure business continuity. To achieve this, it is essential to continue adapting the IT infrastructure needed for massive remote working and to continue the deployment of collaborative digital systems. Beyond these new challenges, the increased risks related to cybersecurity and the maintenance of IT assets, particularly the application base, require vigilance. After responding to the emergency, the project portfolio and the technological agenda must be rethought. This may involve postponing or freezing projects that do not create short-term value in the new context. Conversely, it is necessary to strengthen transformation efforts capable of increasing agility and resilience, in terms of cybersecurity, advanced data analysis tools, planning, or even optimisation of the supply chain. The third major line of action in this crucial period of transition is to tighten human resources management, focusing on the large-scale deployment of agile methods and the development of sensitive expertise such as data science, artificial intelligence or cybersecurity. The war for talent will re-emerge in force when the recovery comes, and it is therefore important to strengthen the attractiveness of the company.


The Attack That Broke Twitter Is Hitting Dozens of Companies

A security staffer at one targeted organization who asked that WIRED not use his name or identify his employer described a more wholesale approach: At least three callers appeared to be working their way through the company directory, trying hundreds of employees over just a 24-hour period. The organization wasn't breached, the staffer said, thanks to a warning that the company had received from another target of the same hacking campaign and passed on to its staff prior to the hacking attempts. "They just keep trying. It's a numbers game," he says. "If we hadn’t had a day or two's notice, it could have been a different story." Phone-based phishing is hardly a new practice for hackers. But until recently, investigators like Allen and Nixon say, the attacks have focused on phone carriers, largely in service of so-called "SIM swap" attacks in which a hacker would convince a telecom employee to transfer a victim's phone service to a SIM card in their possession. They'd use that phone number to intercept two-factor authentication codes, or as a starting point to reset the passwords to cryptocurrency exchange accounts.


IoT governance: how to deal with the compliance and security challenges

“A good way to deal with IoT governance is to have a board as a governance structure. Proposals are presented to the board, which is normally made up of 6-12 individuals who discuss the merits of any new proposal or change. They may monitor ongoing risks like software vulnerabilities by receiving periodic vulnerability reports that include trends or metrics on vulnerabilities. Some boards have a lot of authority, while others may act as an advisory function to an executive or a decision maker,” Wagner advises. ... Instead of focusing on “beefing up” data security, organisations should prioritise data privacy in any governance program. She explains that at “the heart of IoT is the concept of the always-connected customer. Organisations are looking to capture, share and use the large volumes of customer data generated to drive a competitive edge. “The problem is that under GDPR the definition of data privacy is broad, which may find many in hot water as they come to adopt IoT. This is because the regulation places far-reaching responsibilities on organisations to impose a specific ‘privacy by design’ requirement. What this means is that organisations must have in place the appropriate technical and organisational measures to ensure that data privacy is not an afterthought....”


Identity Mismanagement: Why the #1 Cloud Security Problem Is about to Get Worse

There are two major reasons why identity and access management (IAM) is more difficult today than it has been before. One is the sheer scale of cloud deployments; the other is the increased frequency of identity-based cyberattacks. Let's take the problem of scale first. According to recent research, enterprises in 2017 expected to use an average of 17 cloud applications to support their IT, operations, and business strategies. So, it’s no surprise that 61 percent of respondents believe IAM is more difficult today than it was even those two short years ago. With so many different systems in play at any one time, IAM is no longer just about having a rigorous tracking and authentication system in place. In many organizations, the computing cost of authentication and encryption now forms the primary bottleneck on network performance. The second reason why contemporary IAM is more difficult is the dramatic rise in cyberattacks based on compromising identity systems. A decade ago, most cybersecurity analysts were primarily focused on securing data against direct intrusion and theft attempts.



Quote for the day:

"Let us never negotiate out of fear. But let us never fear to negotiate." -- John F. Kennedy

Daily Tech Digest - August 18, 2020

How eSIMs will aid mass market IoT development

This latest iteration of the ubiquitous SIM card, which has played a fundamental role in mobile telecommunications for over a quarter of a century, enables the SIM to be downloaded into a ‘Secure Element’ that can be permanently embedded inside any type of device, or thing. eSIMs can act as an authenticating party between the hardware device and service platform, to ensure end-to-end, chip-to-cloud security. Data can then be encrypted to protect against loss, theft, or tampering, with encryption available via zero-touch provisioning. ... This is the second wave of eSIM hype. The initial industry hype a couple of years ago around the embedded version of the SIM card did not live up to preliminary expectations, in large part because the supply was there, but the demand was not. However, we are now seeing a resurgence, due to the demand increasing as IoT technologies mature and more – different – industries enter the IoT, and as security increases due to legislation. An increasing number of operators, too, are beginning to realise the cost benefits of these types of connections, opening up their networks to unlock the advantages of bundled, multi-device subscription plans, and new revenue opportunities, which is further driving demand.


Combining DataOps and DevOps: Scale at Speed

We need to step away from organizing our teams and technologies around the tools we use to manage data, such as application creation, information management, identity access management and analytics and data science. Instead, we need to realize that data is a vital commodity, and to bring together all those who use or handle data to take a data-centric view of the enterprise. When development teams building applications or data-rich systems learn to look past the data delivery mechanics and instead concentrate on the policies and limitations that control data in their organization, they can align their infrastructure more closely to enable data flow across their organization to those who need it. To make the shift, DataOps needs teams to recognize the challenges of today's technology environment and to think creatively about specific approaches to data challenges in their organization. For example, you might have information about individual users and their functions, data attributes and what needs to be protected for individual audiences, as well as knowledge of the assets needed to deliver the data where it is required. Getting teams together that have different ideas helps the company to evolve faster. Instead of waiting minutes, hours or even weeks for data, environments need to be created in minutes and at the pace required to allow the rapid creation and delivery of applications and solutions.


Deepening Our Understanding Of Good Agile: General Issues

Kuhn distanced himself from the idea that a new theory in science was about the discovery of objective truth. Instead, he viewed each new scientific revolution or synthesis as “less problematic” and “more fruitful” than the previous synthesis, with fewer anomalies and greater predictive power and maybe greater simplicity and clarity. For example, Copernicus’s heliocentric theory of the solar system had no greater predictive power than the previous earth-centric theory. But it won support because it was simpler and seemed more plausible. As it turned out, Copernicus’s theory involved the idea of rotating spheres, which was dead wrong, but the heliocentric part turned out to be right. The theory won broad support, despite its flaws. It is in this sense that we should not be expecting to discover a theory of management that explains the objective truth about management or that prescribes the perfect organizational structure. We should be content if we can find a synthesis that has fewer anomalies and greater predictive power than the previous synthesis. That is so a fortiori for management compared to physical science, because human society is constantly changing, unlike the physical universe. So there is even less likelihood of attaining even temporary truth about the human universe.


Firms Still Struggle to Prioritize Security Vulnerabilities

The underlying problem is that once vulnerabilities have been identified by automated systems, the prioritization and patching process is mostly manual, which slows an organization's response, says Charles Henderson, global managing partner and head of IBM's cybersecurity services team, X-Force Red. "You think of vulnerability management as 'find a flaw, fix a flaw,'" he says. "The problem is that we have gotten really good at finding flaws, and we haven't seen ... as an industry the same attention paid to actually fixing stuff that we find." Patching continues to be a significant problem for most companies. Only 21% of organizations patch vulnerabilities in a timely manner, the survey found. More than half of companies cannot easily track how efficiently vulnerabilities are being patched, do not have enough resources to patch the volume of issues, and lack a common way of viewing assets and applications across the company. In addition, most organizations do not have the ability to tolerate the necessary downtime. Overall, most companies face significant challenges in patching software vulnerabilities, according to the survey of 1,848 IT and IT security professionals by the Ponemon Institute for its State of Vulnerability Management in the Cloud and On-Premises report.


Cloudops tool integration is more important than the tools themselves

What’s missing is direct integration between the AIops tool and the security tool. Although they have different missions, they need each other. The security tool needs visibility into the behavior of all applications and infrastructure, considering that behaviors that are out of line with normal operations can often be tracked to security issues, such as DDoS attacks. At the same time, the cloudops tool could play some role in automatically defending the cloud-based systems, such as attempting a restart or taking other corrective action so the issue does not result in an outage. The recovery could be reported back to the security tool, which would take further action, such as blocking the IP address that is the source of the DDoS attack. This example describes security and ops tools working together, but there is much value in other tool integration as well. Configuration management, testing, special-purpose monitoring such as edge computing and IoT, data governance, etc., can all benefit from working together to create common automation between tools. The smarter cloud management and monitoring players, especially those selling AIops tools, have largely gotten the tool integration religion. 
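
A hypothetical sketch of the DDoS example above: the ops side attempts an automated restart, then reports the outcome to the security tool over a webhook so it can decide whether to block the source IP. The endpoint, payload fields, and tryRestart helper are all invented for illustration.

```ts
interface AnomalyEvent {
  service: string;
  sourceIp: string;
  action: "restarted" | "unrecovered";
}

// Placeholder for a real corrective action (e.g. an orchestrator API call).
async function tryRestart(service: string): Promise<boolean> {
  console.log(`attempting restart of ${service}...`);
  return true;
}

async function handleAnomaly(service: string, sourceIp: string) {
  const recovered = await tryRestart(service); // ops-side corrective action

  const event: AnomalyEvent = {
    service,
    sourceIp,
    action: recovered ? "restarted" : "unrecovered",
  };

  // Report back so the security tool can, e.g., block the offending IP.
  await fetch("https://security-tool.example.com/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}
```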


How Active Cypher is Securing Enterprises from Malware Attacks

The cautious CIO should take the approach that their organization is already infected with ransomware. For the majority of ransomware attacks, users’ negligence is the problem. If a firm has employees, it’s only a matter of time until they get hit with ransomware. IT departments should stop playing roulette, hoping that they are not the ones to fall this month, and instead take a proactive approach: first, securing their data end-to-end through automated file-level encryption like that offered through Active Cypher File Fortress. Second, they should utilize solutions like Ransom Data Guard that effectively shield clients from all permutations of ransomware attacks like WannaCry, RobbinHood, TeslaCrypt… by obfuscating data and actively countering malware when it attempts to attack. Employee cyber-training only gets you so far. ... The success of India’s economy and the rise of its companies have unfortunately led hackers to increasingly attack the country. Active Cypher’s Indian clients are handled in a similar fashion to our global and non-North American clients – our product is not intensive in prep or installation, and company IT teams can download and install it very easily in half a day.
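
Active Cypher's internals aren't described here, but the general idea of automated file-level encryption is easy to sketch. The example below uses the third-party Python cryptography package and is a generic illustration of encrypting files at rest, not the vendor's implementation:

    # Generic illustration of automated file-level encryption; this is NOT
    # Active Cypher's implementation, just the underlying idea: each file is
    # encrypted at rest, so exfiltrated copies are useless to an attacker.
    from pathlib import Path
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice, managed by a key service
    cipher = Fernet(key)

    def encrypt_file(path: Path) -> None:
        path.write_bytes(cipher.encrypt(path.read_bytes()))

    def decrypt_file(path: Path) -> bytes:
        return cipher.decrypt(path.read_bytes())

    doc = Path("report.txt")
    doc.write_text("quarterly numbers")
    encrypt_file(doc)                     # on-disk bytes are now ciphertext
    print(decrypt_file(doc).decode())     # authorized read recovers plaintext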


How robotics and automation could create new jobs in the new normal

“Contrary to some beliefs, I see robots as creating vast amounts of new jobs in the future,” he said. “Just like 50 years ago a website designer, vlogger, or database architect were not things, over the next 50 years we will see many new types of job emerge.” Nicholson cites robot pilots as an example. “Ubiquitous, truly autonomous robots are still a long way from reality, so with semi-autonomous capabilities with humans in the loop, we can achieve much better performance overall and generate a brand-new job sector,” he added. There’s a growing consensus that humans will work in conjunction with robots, performing complementary roles that play to their respective strengths. ... The robots generate a significant amount of performance data, which is automatically compiled into reports that need to be interpreted, assessed, and analyzed to improve operation and fleet performance. While much of this work could be incorporated into existing roles, such tasks may eventually require dedicated employees, leading to the creation of new jobs. “Managers can view the routes being cleaned, take a look at quantitative metrics such as run time and task frequency, and receive notifications around diagnostics and relevant software updates,” Spruijt said.


The Security Interviews: How Crest is remaking the future of consultancy

Now that the security marketplace has grown significantly and security services providers have gone from boutique outfits to big-name brands, this need is becoming greater than ever, says Glover. He adds that buyers are now realising that if they contract their security services to structured organisations that back up their technology claims with certified skills and best practice, they get better outcomes. He also reckons that security consultancy will soon begin to move from an advisory-based practice to an opinion-based practice. “We haven’t really done that as an industry yet, but I absolutely believe that is the direction of play,” he says. But what does that actually mean? Glover explains: “Right now, we provide advice and guidance. We look at your systems and we say ‘that’s not very good – you should correct it’. That’s advice. But what we’re now seeing under GDPR [General Data Protection Regulation] and other regulations is you are asked if you have taken appropriate steps to secure your data, otherwise the regulator is going to take regulatory action or fine you a lot of money. “So we are now moving into this area where security consultants have to be professional auditors and say, in our professional opinion, this organisation has or has not taken appropriate steps to secure its data. ...”


What working from home means for CISOs

It’s easy to understand why employees do what they do. CISOs have always had trouble convincing them that productivity and protection are not mutually exclusive — that users can do their jobs just as effectively by following policies, accepting security controls, and using pre-approved apps and devices. Especially while working from home, the shift to productivity at all costs has threatened to disrupt this delicate balance. It comes as cyber criminals look to capitalise on distracted home workers, unprotected endpoints, overwhelmed VPNs, and distributed security teams who may be forced to focus on more pressing operational IT tasks. Google is blocking as many as 18 million Covid-themed malicious and phishing emails every day. It takes just one to get through and convince a remote worker to click, and the organisation may be confronted with the prospect of a debilitating ransomware outage, BEC-related financial loss, or damaging data breach. With many organisations struggling financially in the wake of government-mandated lockdowns, few will welcome the costs associated with a serious security incident.


Web of Things Over IoT and Its Applications

Low-level sensors and hardware devices rarely have direct internet connectivity. Low-level sensors, such as temperature and motion sensors, usually transfer data using low-level protocols like Bluetooth Low Energy (BLE), Zigbee, or 6LoWPAN, which are not internet compatible. Since IoT gateways understand those low-level protocols, they essentially play the role of adapters between the internet and those sensors; protocol transformation also takes place here. IoT gateways are installed inside smart homes, smart factories, and the like (that is, inside local area networks where no unified communication standard is available), so those gateways can be used to communicate using proprietary data formats over the internet. Additionally, multiple cloud vendors provide IoT services in different shapes and forms, and once again there is a lack of standardization. Amazon's Alexa, for example, is tied to Philips Hue, so Amazon and Hue can understand their shared data format, but no one else can. This gravitates towards the vendor lock-in black hole. To get rid of this problem, IoT needs vendor-neutral standards for the internet.
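
That adapter role can be made concrete with a toy example: a gateway parses a proprietary low-level payload and republishes it in a vendor-neutral, self-describing JSON shape, in the spirit of the W3C Web of Things. The payload layout and property names below are illustrative assumptions:

    # Sketch of an IoT gateway acting as a protocol adapter: a proprietary
    # low-level reading goes in, a vendor-neutral JSON "web thing" comes out.
    # The payload layout and property names are illustrative assumptions.
    import json

    def parse_ble_temperature(payload: bytes) -> float:
        # Pretend the sensor sends temperature as hundredths of a degree,
        # little-endian, in the first two bytes.
        return int.from_bytes(payload[:2], "little") / 100.0

    def to_web_thing(device_id: str, temperature_c: float) -> str:
        # Expose the reading in one common, self-describing format so any
        # cloud or client can consume it, with no vendor-specific schema.
        return json.dumps({
            "id": f"urn:dev:{device_id}",
            "title": "Temperature Sensor",
            "properties": {"temperature": {"value": temperature_c,
                                           "unit": "celsius"}},
        })

    raw = (2215).to_bytes(2, "little")          # 22.15 C from the radio
    print(to_web_thing("ble-kitchen-01", parse_ble_temperature(raw)))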



Quote for the day:

"Leadership is the art of influencing people to execute your strategic thinking." -- Nabil Khalil Basma

Daily Tech Digest - August 17, 2020

Remote DevOps is here to stay!

With a mass exodus of the workforce towards a home setting, especially in India, the demand for skilled professionals in DevOps has dramatically increased. A recent GitHub report on the implications of COVID-19 for the developer community suggests that developer activity has increased compared to last year. This also means that developers have shown resilience and continued to contribute, undeterred by the crisis. This is a shining moment for DevOps, which is built for remote operations. In a ‘choose your own adventure’ situation, DevOps helps organizations evaluate their own goals, skills, bottlenecks, and blockers to curate a modern application development and deployment process that works for them. According to an UpGuard report on DevOps stats for doubters, 63% of organizations that implemented DevOps experienced improvement in the quality of their software deployments. Delivering business value from data is contingent on developers’ ability to innovate through methods like DevOps. It is about deploying the right foundation for modern application development across both public and private clouds. The current environment is uncharted territory for many enterprises.


Breaking Down Serverless Anti-Patterns

The goal of building with serverless is to dissect the business logic in a manner that results in independent and highly decoupled functions. This, however, is easier said than done, and developers often run into scenarios where libraries, business logic, or even just basic code has to be shared between functions, leading to a form of dependency and coupling that works against the serverless architecture. Functions depending on one another through a shared code base and logic leads to an array of problems. The most prominent is that it hampers scalability. As your systems scale and functions are constantly reliant on one another, there is an increased risk of errors, downtime, and latency. The entire premise of microservices was to avoid these issues. Additionally, one of the selling points of serverless is its scalability; coupling functions together via shared logic and a shared codebase undermines not only the microservices premise but also that core value. To visualize this, consider three functions: a change in the data logic of function A will lead to necessary changes in how data is communicated and processed in function B, and even function C may be affected, depending on the exact use case.
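
A toy sketch makes the coupling visible. In the first pair of functions below, A and B share in-process logic, so changing A ripples into B; in the second pair, they communicate through an explicit, versioned contract and can be deployed and scaled independently. All names are hypothetical:

    # Anti-pattern: functions A and B share in-process logic, so a change
    # to A's output format silently breaks B. (Names are illustrative.)
    def shared_shape(record):            # shared library both functions import
        return {"id": record["id"], "total": record["amount"]}

    def function_a(event):
        return shared_shape(event)       # A's change ripples into B

    def function_b(payload):
        return payload["total"] * 1.2    # B depends on A's internal shape

    # Better: communicate through an explicit, versioned event contract,
    # so each function can be deployed and scaled independently.
    def function_a_decoupled(event):
        return {"version": 1, "order_id": event["id"], "total": event["amount"]}

    def function_b_decoupled(message):
        assert message["version"] == 1   # B validates the contract, not A's code
        return message["total"] * 1.2

    print(function_b_decoupled(function_a_decoupled({"id": 7, "amount": 10.0})))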


Why Service Meshes Are Security Tools

Modern engineering organizations need to give individual developers the freedom to choose what components they use in applications as well as how to manage their own workflows. At the same time, enterprises need to ensure that there are consistent ways to manage how all of the parts of an application communicate inside the app as well as with external dependencies. A service mesh provides a uniform interface between services. Because it’s attached as a sidecar acting as a micro-dataplane for every component within the service mesh, it can add encryption and access controls to communication to and from services, even if neither is natively supported by that service. Just as importantly, the service mesh can be configured and controlled centrally. Individual developers don’t have to set up encryption or configure access controls; security teams can establish organization-wide security policies and enforce them automatically with the service mesh. Developers get to use whatever components they need and aren’t slowed down by security considerations. Security teams can make sure encryption and access controls are configured appropriately, without depending on developers at all.
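
A real mesh does this with per-pod sidecar proxies and mutual TLS, but the division of labor (security teams define policy centrally, application code stays untouched) can be sketched conceptually. The policy table and service names below are illustrative, not any mesh's actual configuration:

    # Conceptual sketch of the sidecar idea: a proxy wraps every service call
    # and applies centrally defined policy. Names and rules are illustrative;
    # real meshes (Istio, Linkerd, etc.) use per-pod proxies and mTLS.
    CENTRAL_POLICY = {
        ("frontend", "orders"): "allow",
        ("frontend", "billing"): "deny",
    }

    def sidecar_call(src: str, dst: str, handler, request):
        # The sidecar, not the service, enforces access control...
        if CENTRAL_POLICY.get((src, dst), "deny") != "allow":
            raise PermissionError(f"{src} -> {dst} blocked by mesh policy")
        # ...and would also encrypt the hop (mTLS) before forwarding.
        return handler(request)

    def orders_service(request):
        return {"status": "ok", "items": request["items"]}

    print(sidecar_call("frontend", "orders", orders_service, {"items": 2}))

Note that orders_service itself contains no security code at all; everything lives in the centrally managed wrapper, which is the property the article is describing.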


Review: AWS Bottlerocket vs. Google Container-Optimized OS

To isolate containers, Bottlerocket uses control groups (cgroups) and kernel namespaces to separate the containers running on the system. eBPF (extended Berkeley Packet Filter) is used to further isolate containers and to verify container code that requires low-level system access. The eBPF secure mode prohibits pointer arithmetic, traces I/O, and restricts the kernel functions the container has access to. The attack surface is reduced by running all services in containers. While a container might be compromised, it’s less likely the entire system will be breached, thanks to container isolation. Updates are automatically applied when running the Amazon-supplied edition of Bottlerocket via a Kubernetes operator that comes installed with the OS. An immutable root filesystem, which creates a hash of the root filesystem blocks and relies on a verified boot path using dm-verity, ensures that the system binaries haven’t been tampered with. The configuration is stateless, and /etc/ is mounted on a RAM disk. When running on AWS, configuration is accomplished with the API, and these settings persist across reboots, as they come from file templates within the AWS infrastructure.
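
The dm-verity idea (hash the filesystem's blocks at build time, refuse to trust them if any hash changes) is simple enough to sketch. Real dm-verity verifies a Merkle tree on demand rather than a flat hash list, so treat this only as an illustration of the principle:

    # Toy illustration of verified-boot integrity checking in the dm-verity
    # style: hash every block at build time, re-check before trusting them.
    # Real dm-verity uses a Merkle tree verified on demand, not a flat list.
    import hashlib

    BLOCK_SIZE = 4096

    def hash_blocks(image: bytes) -> list[str]:
        return [hashlib.sha256(image[i:i + BLOCK_SIZE]).hexdigest()
                for i in range(0, len(image), BLOCK_SIZE)]

    build_time_image = b"\x00" * BLOCK_SIZE * 4        # pristine root filesystem
    trusted_hashes = hash_blocks(build_time_image)     # shipped with the OS

    tampered = bytearray(build_time_image)
    tampered[5000] ^= 0xFF                             # attacker flips one bit

    ok = hash_blocks(bytes(tampered)) == trusted_hashes
    print("boot allowed" if ok else "integrity check failed")  # -> failed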


Microsoft tells Windows 10 users they can never uninstall Edge. Wait, what?

Microsoft explained it was migrating all Windows users from the old Edge to the new one. The update added: "The new version of Microsoft Edge gives users full control over importing personal data from the legacy version of Microsoft Edge." Hurrah, I hear you cry. That's surely holier than Google. Microsoft really cares. Yet next were these words: "The new version of Microsoft Edge is included in a Windows system update, so the option to uninstall it or use the legacy version of Microsoft Edge will no longer be available." Those prone to annoyance would cry: "What does it take not only to force a product onto a customer, but also to make sure that they can never get rid of that product, even if they want to? Even cable companies ultimately discovered that customers find ways out." Yet, as my colleague Ed Bott helpfully pointed out, there's a reason you can't uninstall Edge. Well, initially. It's the only way you can download the browser you actually want to use. You can, therefore, hide Edge -- it's not difficult -- but not completely eliminate it from your life. Actually, that's not strictly true either. The tech world houses many large and twisted brains. They don't only work at Microsoft. Some immediately suggested methods to get your legacy Edge back on Windows 10. Here's one way to do it.


Digital public services: How to achieve fast transformation at scale

For most public services, digital reimagination can significantly enhance the user experience. Forms, for example, can require less data and pull information directly from government databases. Texts or push notifications can use simpler language. Users can upload documents as scans. In addition, agencies can link touchpoints within a single user journey and offer digital status notifications. Implementing all of these changes is no trivial matter and requires numerous actors to collaborate. Several public authorities are usually involved, each of which owns different touchpoints on the user journey. The number of actors increases exponentially when local governments are responsible for service delivery. Often, legal frameworks must be amended to permit digitization, meaning that the relevant regulator needs to be involved. Yet when governments use established waterfall approaches to project management (in which each step depends on the results of the previous step), digitization can take a long time and the results often fall short. In many cases, long and expensive projects have delivered solutions that users have failed to adopt.


State-backed hacking, cyber deterrence, and the need for international norms

The issue of how cyber attack attribution should be handled and confirmed also deserves to be addressed. Dr. Yannakogeorgos says that, while attribution of cyber attacks is definitely not as clear-cut as seeing smoke coming out of a gun in the real world, with robust law enforcement, public-private partnerships, cyber threat intelligence firms, and information sharing via ISACs, the US has come a long way in terms of not only figuring out who conducted criminal activity in cyberspace, but also arresting global networks of cyber criminals. Granted, things get trickier when these actors are working for or on behalf of a nation-state. “If these activities are part of a covert operation, then by definition the government will have done all it can for its actions to be ‘plausibly deniable.’ This is true for activities outside of cyberspace as well. Nations can point fingers at each other, and present evidence. The accused can deny and say the accusations are based on fabrications,” he explained. “However, at least within the United States, we’ve developed a very robust analytic framework for attribution that can eliminate reasonable doubt amongst friends and allies, and can send a clear signal to planners on the opposing side...."


Tackling Bias and Explainability in Automated Machine Learning

At a minimum, users need to understand the risk of bias in their data set because much of the bias in model building can be human bias. That doesn't mean just throwing out variables, which, if done incorrectly, can lead to additional issues. Research in bias and explainability has grown in importance recently, and tools are starting to reach the market to help. For instance, the AI Fairness 360 (AIF360) project, launched by IBM, provides open source bias mitigation algorithms developed by the research community. These include bias mitigation algorithms for the pre-processing, in-processing, and post-processing stages of machine learning. In other words, the algorithms operate over the data to identify and treat bias. Vendors, including SAS, DataRobot, and H2O.ai, are providing features in their tools that help explain model output. One example is a bar chart that ranks each feature's impact, which makes it easier to tell what features are important in the model. Vendors such as H2O.ai provide three kinds of output that help with explainability and bias: feature importance, Shapley values and partial dependence plots (e.g., how much a feature value contributed to the prediction), and disparate impact analysis. Disparate impact analysis quantitatively measures the adverse treatment of protected classes.
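
Disparate impact has a concrete definition: the rate of favorable outcomes for the unprivileged group divided by the rate for the privileged group, with values near 1.0 indicating parity (0.8 is a common rule-of-thumb floor). It can be computed by hand on toy data, without assuming any particular toolkit's API:

    # Disparate impact = P(favorable | unprivileged) / P(favorable | privileged).
    # Toy data; a value far below 1.0 flags adverse treatment of the
    # unprivileged group. Thresholds like 0.8 are a common rule of thumb.
    def disparate_impact(outcomes: list[tuple[str, int]]) -> float:
        # outcomes: (group, prediction) pairs, prediction 1 = favorable
        def rate(group: str) -> float:
            preds = [y for g, y in outcomes if g == group]
            return sum(preds) / len(preds)
        return rate("unprivileged") / rate("privileged")

    data = [("privileged", 1)] * 70 + [("privileged", 0)] * 30 \
         + [("unprivileged", 1)] * 40 + [("unprivileged", 0)] * 60

    print(round(disparate_impact(data), 2))   # 0.57 -> likely biased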


Chief Data Analytics Officers – The Key to Data-Driven Success?

Core to the role is the experience and desire to use data to solve real business problems. Combining an overarching view of the data across the organisation, with a well-articulated data strategy, the CDAO is uniquely placed to balance specific needs for data against wider corporate goals. They should be laser-focused on extracting value from the bank’s data assets and ‘connecting-the-dots’ for others. By seeing and effectively communicating the links between different data and understanding how it can be combined to deliver business benefit, the CDAO does what no other role can do: bring the right data from across the business, plus the expertise of data scientists, to bear on every opportunity. Balance is critical. Leveraging their understanding of analytics and data quality, the CDAO can bring confidence to business leaders afraid to engage with data. They understand governance, and so can police which data can be used for innovation and which is business critical and ‘untouchable.’ They can deploy and manage data scientists to ensure they are focused on real business issues not pet analytics projects. Innovation-focused CDAOs will actively look for ways to generate returns on data assets, and to partner with commercial units to create new revenue from data insights.


How the network can support zero trust

One broad principle of zero trust is least privilege: granting individuals access to just enough resources to carry out their jobs and nothing more. One way to accomplish this is network segmentation, which breaks the network into unconnected sections based on authentication, trust, user role, and topology. If implemented effectively, it can isolate a host on a segment and minimize its lateral or east–west communications, thereby limiting the "blast radius" of collateral damage if a host is compromised. Because hosts and applications can reach only the limited resources they are authorized to access, segmentation prevents attackers from gaining a foothold in the rest of the network. Entities are granted access to resources based on context: who an individual is, what device is being used to access the network, where it is located, how it is communicating, and why access is needed. There are other methods of enforcing segmentation. One of the oldest is physical separation, in which physically separate networks with their own dedicated servers, cables, and network devices are set up for different levels of security. While this is a tried-and-true method, it can be costly to build completely separate environments for each user's trust level and role.
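
The context-based decision described above is essentially a policy function evaluated over a request's attributes. A minimal sketch, with illustrative attributes and rules rather than any product's schema:

    # Sketch of a context-aware, least-privilege access decision: every
    # request is evaluated against who, what device, where, and why.
    # The attributes and rules are illustrative, not a product's schema.
    def authorize(ctx: dict) -> bool:
        checks = [
            ctx["role"] in {"engineer", "oncall"},          # who
            ctx["device_managed"],                          # what device
            ctx["location"] in {"office", "vpn"},           # where
            ctx["resource"] in ROLE_SEGMENTS[ctx["role"]],  # need-to-access
        ]
        return all(checks)

    ROLE_SEGMENTS = {
        "engineer": {"ci-servers", "source-repo"},
        "oncall": {"ci-servers", "prod-logs"},
    }

    print(authorize({"role": "engineer", "device_managed": True,
                     "location": "vpn", "resource": "prod-logs"}))    # False
    print(authorize({"role": "oncall", "device_managed": True,
                     "location": "office", "resource": "prod-logs"}))  # True

The role-to-segment mapping plays the part of the network segments in the article: even a fully authenticated engineer cannot reach a segment their role does not require.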



Quote for the day:

"Gratitude is the place where all dreams come true. You have to get there before they do." -- Jim Carrey