Daily Tech Digest - November 30, 2020

Pairing AI With Human Judgment Is Key To Avoiding 'Mount Stupid'

We are in the midst of what has been called the age of automation — a transformation of our economy, as robots, algorithms, AI and machines become integrated into everything we do. It would be a mistake to assume automation corrects for the Dunning-Kruger effect. Like humans, dumb bots and even smart AI often do not understand the limitations of their own competency. Machines are just as likely to scale Mount Stupid, and the climb can lead to equally disastrous decisions. But there is a fix for that. For humans, the fix is adding some humility to our decision making. For machines, it means creating flexible systems that are designed to make allowances and seamlessly handle outlier events — the unknowns. Integrating humans into that system makes it possible to catch those potential automation failures. In automation, this is sometimes referred to as a human-in-the-loop system. Much like an autonomous vehicle, these systems keep improving as they acquire more input. It’s not rigid; if the autonomous vehicle encounters a piece of furniture in the road, a remote driver can step in to navigate around it in real time while the AI or automation system learns from the actions taken by the remote driver.
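
The human-in-the-loop pattern just described can be sketched as a confidence-threshold router: the automation handles what it is sure about and defers outliers to a person, whose correction is queued for retraining. Everything below (function names, the 0.8 threshold) is an illustrative assumption, not something from the article.

```python
# Hypothetical human-in-the-loop sketch: route low-confidence
# predictions to a human reviewer and keep the corrections so the
# model can be retrained on them later.
training_queue = []  # (observation, corrected_label) pairs for retraining

def classify(observation, model, ask_human, threshold=0.8):
    """`model` returns (label, confidence); `ask_human` is a callback
    that supplies the authoritative label for outlier inputs."""
    label, confidence = model(observation)
    if confidence >= threshold:
        return label, "automated"
    # Outlier: defer to the human in the loop, and record the corrected
    # example so the automation learns from the action taken.
    corrected = ask_human(observation)
    training_queue.append((observation, corrected))
    return corrected, "human"
```

The point of the sketch is the flexibility the article describes: the system is not rigid, because the unknowns fall through to a person instead of to a wrong automated decision.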

UK government ramps up efforts to regulate tech giants

Digital Secretary Oliver Dowden said: “There is growing consensus in the UK and abroad that the concentration of power among a small number of tech companies is curtailing growth of the sector, reducing innovation and having negative impacts on the people and businesses that rely on them. It’s time to address that and unleash a new age of tech growth.” While the Furman report found that there have been a number of efforts between the tech giants to support interoperability, giving consumers greater freedom and flexibility, these can be hampered by technical challenges and a lack of coordination. The report's authors wrote that, in some cases, the lack of interoperability is due to misaligned incentives. “Email standards emerged due to co-operation but phone number portability only came about when it was required by regulators. Private efforts by digital platforms will be similarly hampered by misaligned incentives. Open Banking provides an instructive example of how policy intervention can overcome technical and coordination challenges and misaligned incentives.” In July, when the DMT was set up, law firm Osborne Clarke warned about the disruption to businesses that increased regulation could bring.

Consumption of public cloud is way ahead of the ability to secure it

As the shift to working from home began at the start of the year, the old reliance on the VPN showed itself to be a potential bottleneck that kept employees from doing what they are paid to do. "I think the new mechanism that we've been sitting on -- everyone's been doing for 20 years around VPN as a way of segmentation -- and then the zero trust access model is relatively new, I think that mechanism is really intriguing because [it] is so extensible to so many different problems in use cases that VPNs didn't solve, and then other use cases that people didn't even consider because there was no mechanism to do it," Jefferson said. Going a step further, Eren thinks VPN usage between client and sites is on life support, but VPNs themselves are not going away. ... According to Jefferson, the new best practice is to push security controls as far out to the edge as possible, which undermines the ability of traditional appliances like firewalls to enforce security, and people are having to work out the best place for their controls in the new working environment. "I used to be pretty comfortable. This guy, he had 10,000 lines of code written on my Palo Alto or Cisco and every time we did a firewall refresh every 10 years, we had to worry about the 47,000 ACLs [access control list] on the firewall, and now that gets highly distributed," he said.

IOT & Distributed Ledger Technology Is Solving Digital Economy Challenges

DLTs can play an important role in establishing data provenance, but they ought to be used in conjunction with other technologies, for example hardware root of trust and immutable storage. Distributed ledger technology only maintains a record of the transactions themselves, so if you have poor or fake data, it will simply tell you where that bad data has been. In other words, DLTs alone do not solve computer science's garbage-in, garbage-out problem, yet they offer considerable advantages when used in concert with technologies that ensure data integrity. Blockchain technology promises to be the missing link enabling peer-to-peer contractual behavior with no third party to “certify” the IoT transaction. It answers the challenges of scalability, single point of failure, time stamping, record-keeping, security, trust and reliability in a consistent way. Blockchain technology could provide a basic infrastructure for two devices to directly transfer a piece of property, such as money or data, between each other with a secured and reliable time-stamped contractual handshake. To enable message exchanges, IoT devices will use smart contracts, which then model the agreement between the two parties.
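
The provenance point can be made concrete with a toy hash-linked, time-stamped ledger: it proves where data has been and that it was not altered afterwards, but not that the data was good in the first place. This is a plain-Python sketch, not the API of any DLT platform.

```python
import hashlib
import json
import time

def append_block(chain, payload):
    """Append a time-stamped, hash-linked record to a toy ledger.
    The chain records provenance; garbage in is still garbage out."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "ts": time.time(), "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps({"payload": block["payload"], "ts": block["ts"],
                    "prev": block["prev"]}, sort_keys=True).encode()
    ).hexdigest()
    chain.append(block)
    return block

def verify(chain):
    """Check that no block has been altered after the fact and that
    the hash links are intact."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        recomputed = hashlib.sha256(
            json.dumps({"payload": block["payload"], "ts": block["ts"],
                        "prev": block["prev"]}, sort_keys=True).encode()
        ).hexdigest()
        if block["prev"] != prev or block["hash"] != recomputed:
            return False
    return True
```

Tampering with any recorded reading breaks verification, which is exactly the property the article attributes to DLTs; whether the reading was accurate when recorded is a job for data-integrity technologies such as a hardware root of trust.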

Does small data provide sharper insights than big data?

Data imbalance occurs when the number of data points for different classes is uneven. In most machine learning settings imbalance is not a problem, but it becomes consequential with Small Data. One technique is to change the loss function by adjusting class weights, another example of how AI models are not perfect. A very readable explanation of imbalance and its remedies can be found here.  Difficulty in optimization is a fundamental problem, since optimization is what machine learning is meant to do. Optimization starts with defining some kind of loss function/cost function, and it ends with minimizing that function using one optimization routine or another, usually Gradient Descent, an iterative algorithm for finding a local minimum of a differentiable function (first-semester calculus, not magic). But if the dataset is weak, the technique may not optimize. The most popular remedy is Transfer Learning. As the name implies, transfer learning is a machine learning method in which a model trained on one task is reused to enhance a model for another, related task. A simple explanation of transfer learning can be found here. I wanted to do #3 first, because #2 is the more compelling discussion about small data.
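
The class-weight adjustment mentioned above can be sketched in a few lines: give each class a weight inversely proportional to its frequency, then scale each example's loss by the weight of its true class. This is a framework-agnostic illustration, not code from the referenced explanation.

```python
from collections import Counter
import math

def class_weights(labels):
    """Inverse-frequency weights: rare classes get larger weights,
    so misclassifying them costs more in the loss."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: total / (len(counts) * n) for cls, n in counts.items()}

def weighted_log_loss(labels, probs, weights):
    """Cross-entropy where each example is scaled by its class weight.
    `probs[i]` is the model's predicted probability of the true label."""
    return sum(-weights[y] * math.log(p)
               for y, p in zip(labels, probs)) / len(labels)
```

With a 90/10 split, the minority class ends up weighted 5.0 against roughly 0.56 for the majority, so the optimizer can no longer satisfy the loss by ignoring the rare class.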

84% of global decision makers accelerating digital transformation plans

“New ways of working, initially broadly imposed by the global pandemic, are morphing into lasting models for the future,” said Mickey North Rizza, program vice president for IDC‘s Enterprise Applications and Digital Commerce research practice. “Permanent technology changes, underpinned by improved collaboration, include supporting hybrid work, accelerating cloud use, increasing automation, going contactless, adopting smaller TaskApps, and extending the partnership ecosystem. Enterprise application vendors need to assess their immediate and long-term strategies for delivering collaboration platforms in conjunction with their core software.” “If we’ve learned anything this year, it’s that the business environment can change almost overnight, and as business leaders we have to be able to reimagine our organizations and seize opportunities to secure sustainable competitive advantage,” said Mike Ettling, CEO, Unit4. “Our study shows what is possible with continued investment in innovation and a people-first, flexible enterprise applications strategy. As many countries go back into some form of lockdown, this people-centric focus is crucial if businesses are to survive the challenges of the coming months.”

How Apache Pulsar is Helping Iterable Scale its Customer Engagement Platform

Pulsar’s top layer consists of brokers, which accept messages from producers and send them to consumers, but do not store data. A single broker handles each topic partition, but the brokers can easily exchange topic ownership, as they do not store topic states. This makes it easy to add brokers to increase throughput and immediately take advantage of new brokers. This also enables Pulsar to handle broker failures. ... One of the most important functions of Iterable’s platform is to schedule and send marketing emails on behalf of Iterable’s customers. To do this, we publish messages to customer-specific queues, then have another service that handles the final rendering and sending of the message. These queues were the first thing we decided to migrate from RabbitMQ to Pulsar. We chose marketing message sends as our first Pulsar use case for two reasons. First, because sending incorporated some of our more complex RabbitMQ use cases. And second, because it represented a very large portion of our RabbitMQ usage. This was not the lowest risk use case; however, after extensive performance and scalability testing, we felt it was where Pulsar could add the most value.
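
Because brokers hold no topic state, ownership can move freely when brokers join or fail. The toy model below illustrates that property only; it is invented for illustration and is not the pulsar-client API.

```python
class BrokerPool:
    """Toy model of Pulsar's stateless broker layer: brokers serve
    topic partitions but keep no topic state, so partition ownership
    can be handed off freely. Illustrative only."""

    def __init__(self, brokers):
        self.brokers = set(brokers)
        self.ownership = {}  # partition -> owning broker

    def assign(self, partition):
        # Pick the least-loaded live broker (name as a tie-break).
        loads = {b: 0 for b in self.brokers}
        for owner in self.ownership.values():
            if owner in loads:
                loads[owner] += 1
        broker = min(self.brokers, key=lambda b: (loads[b], b))
        self.ownership[partition] = broker
        return broker

    def fail(self, broker):
        # No state lives on the broker, so its partitions can be
        # reassigned to the survivors immediately.
        self.brokers.discard(broker)
        for partition, owner in list(self.ownership.items()):
            if owner == broker:
                self.assign(partition)
```

Adding a broker is the mirror image: new partitions (or rebalanced ones) land on it at once, which is the "immediately take advantage of new brokers" behavior described above.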

Algorithmic transparency obligations needed in public sector

The review notes that bias can enter algorithmic decision-making systems in a number of ways. These include historical bias, in which data reflecting previously biased human decision-making or historical social inequalities is used to build the model; data selection bias, in which the data collection methods used mean it is not representative; and algorithmic design bias, in which the design of the algorithm itself leads to an introduction of bias. Bias can also enter the algorithmic decision-making process because of human error as, depending on how humans interpret or use the outputs of an algorithm, there is a risk of bias re-entering the process as they apply their own conscious or unconscious biases to the final decision. “There is also risk that bias can be amplified over time by feedback loops, as models are incrementally retrained on new data generated, either fully or partly, via use of earlier versions of the model in decision-making,” says the review. “For example, if a model predicting crime rates based on historical arrest data is used to prioritise police resources, then arrests in high-risk areas could increase further, reinforcing the imbalance.”

Regulation on data governance

The data governance regulation will ensure access to more data for the EU economy and society and provide for more control for citizens and companies over the data they generate. This will strengthen Europe’s digital sovereignty in the area of data. It will be easier for Europeans to allow the use of data related to them for the benefit of society, while ensuring full protection of their personal data. For example, people with rare or chronic diseases may want to give permission for their data to be used in order to improve treatments for those diseases. Through personal data spaces, which are novel personal information management tools and services, Europeans will gain more control over their data and decide on a detailed level who will get access to their data and for what purpose. Businesses, both small and large, will benefit from new business opportunities as well as from a reduction in costs for acquiring, integrating and processing data, from lower barriers to enter markets, and from a reduction in time-to-market for novel products and services. ... Member States will need to be technically equipped to ensure that privacy and confidentiality are fully respected. 

Why Vulnerable Code Is Shipped Knowingly

Even with a robust application security program, organizations will still deploy vulnerable code! The difference is that they will do so with a thorough and contextual understanding of the risks they're taking rather than allowing developers or engineering managers — who lack security expertise — to make that decision. Application security requires a constant triage of potential risks, involving prioritization decisions that allow development teams to mitigate risk while still meeting key deadlines for delivery. As application security has matured, no single testing technique has helped development teams mitigate all security risk. Teams typically employ multiple tools, often from multiple vendors, at various points in the SDLC. Usage varies, as do the tools that organizations deem most important, but most organizations end up utilizing a set of tools to satisfy their security needs. Lastly, while most organizations provide developers with some level of security training, more than 50% only do so annually or less often. This is simply not frequent or thorough enough to develop secure coding habits. While development managers are often responsible for this training, in many organizations, application security analysts carry the burden of performing remedial training for development teams or individual developers who have a track record of introducing too many security issues.
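
The "constant triage" described above amounts to ranking findings by risk and fixing as many as the delivery deadline allows, with the remainder shipped as documented, understood risk. The fields and scoring below are hypothetical, not from any real appsec tool.

```python
def prioritize(findings, capacity):
    """Hypothetical triage sketch: rank findings by a simple risk
    score (severity x exposure) and fix as many as sprint capacity
    allows; everything else ships as an explicitly accepted risk."""
    ranked = sorted(findings,
                    key=lambda f: f["severity"] * f["exposure"],
                    reverse=True)
    fix_now, accepted_risk = [], []
    remaining = capacity
    for f in ranked:
        if f["effort"] <= remaining:
            fix_now.append(f["id"])
            remaining -= f["effort"]
        else:
            accepted_risk.append(f["id"])
    return fix_now, accepted_risk
```

The value of a process like this is not the arithmetic but the paper trail: the code that ships vulnerable does so with a recorded, contextual decision rather than by developer default.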

Quote for the day:

"Most people live with pleasant illusions, but leaders must deal with hard realities." - Orrin Woodward

Daily Tech Digest - November 29, 2020

Microservice Architecture: Why Containers And IoT Are A Perfect Fit

Apart from technical considerations, the way a software development team is set up also plays a critical role in the software technology decision. A major advantage of containerization is the great flexibility and manageability of the overall development process. While previous monolithic software development often had cumbersome documentation requirements, hard-to-predict timelines and complicated synchronization processes, the container-based approach can deliver a different experience. If you're able to split up the project into isolated containers, you can divide the team into smaller groups with faster iterations and address additional feature requests more easily to cater to modern agile processes. Containerization also bridges the two worlds of cloud and embedded development by aligning the underlying technology, unifying the development workflow and leveraging the automation capabilities containers provide. With that, it becomes much easier to support hybrid workflows and reuse the same software. This is important for IoT projects if customers have vastly different network environments, data ownership requirements and solution approaches.

Automation with intelligence

For organisations to successfully integrate intelligent automation, they must first acknowledge that transformation is necessary. It starts with making a conscious choice about what they want to achieve, based on the ‘art of the possible’. This decision is then fed into a robust and realistic intelligent automation strategy. That is the ideal, but here is the reality: Only 26 per cent of Deloitte’s survey respondents that are piloting automations – and 38 per cent of those implementing and scaling – have an enterprise-wide intelligent automation strategy. There is a clear difference between organisations piloting automations and those implementing and scaling their efforts. The latter are more likely to reimagine what they do and incorporate process change across functional boundaries. Those in the piloting stage are more likely to automate current processes, with limited change – they may have not yet taken advantage of the many technologies and techniques that can expand their field of vision and open up even more opportunities. There are other barriers to success: process fragmentation and a lack of IT readiness were ranked by survey respondents at the top of the list (consistent with responses in the past two years).

XDR: Unifying incident detection, response and remediation

The primary driver behind XDR is its fusing of analytics with detection and response. The premise is that these functions are not and should not be separate. By bringing them together, XDR promises to deliver many benefits. The first is a precise response to threats. Instead of keeping logs in a separate silo, with XDR they can be used to immediately drive response actions with higher fidelity and deeper knowledge of the details surrounding an incident. For example, the traditional SIEM approach is based on monitoring network log data for threats and responding on the network. Unless a threat is simple, like commodity malware that can be easily cleaned up, remediation is typically delayed until a manual investigation is performed. XDR, on the other hand, provides SOCs both the visibility and the ability to not just respond but also remediate. SOC operators can take precise rather than broad actions, and not just across the network, but also the endpoint and other areas. Because XDR seeks to fuse the analysis, control and response planes, it provides a unified view of threats. Instead of forcing SOCs to use multiple interfaces to threat hunt and investigate, event data and analytics are brought together in XDR to provide the full context needed to precisely respond to an incident.
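
The "fused view" idea can be sketched as a simple cross-silo correlation: instead of leaving network and endpoint logs in separate tools, group them by host so a responder sees one incident with full context. The event shapes and host names are illustrative assumptions, not any vendor's XDR schema.

```python
from collections import defaultdict

def correlate(network_events, endpoint_events):
    """Group network and endpoint events by host. A host that shows
    up in both silos is a higher-fidelity lead than either signal
    alone, which is the XDR premise in miniature."""
    incidents = defaultdict(lambda: {"network": [], "endpoint": []})
    for e in network_events:
        incidents[e["host"]]["network"].append(e["event"])
    for e in endpoint_events:
        incidents[e["host"]]["endpoint"].append(e["event"])
    # Keep only hosts corroborated across both telemetry sources.
    return {h: v for h, v in incidents.items()
            if v["network"] and v["endpoint"]}
```

In a real product the response and remediation actions would hang off this unified record; the sketch only shows why fusing the silos sharpens the signal.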

The algorithms are watching us, but who is watching the algorithms?

Improving the process of data-based decisions in the public sector should be seen as a priority, according to the CDEI. "Democratically-elected governments bear special duties of accountability to citizens," reads the report. "We expect the public sector to be able to justify and evidence its decisions." The stakes are high: earning the public's trust will be key to the successful deployment of AI. Yet the CDEI's report showed that up to 60% of citizens currently oppose the use of AI-infused decision-making in the criminal justice system. The vast majority of respondents (83%) are not even certain how such systems are used in the police forces in the first place, highlighting a gap in transparency that needs to be plugged. There is a lot that can be gained from AI systems if they are deployed appropriately. In fact, argued the CDEI's researchers, algorithms could be key to identifying historical human biases – and making sure they are removed from future decision-making tools. "Despite concerns about 'black box' algorithms, in some ways algorithms can be more transparent than human decisions," said the researchers. "Unlike a human, it is possible to reliably test how an algorithm responds to changes in parts of the input."
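
The researchers' closing point, that one can reliably test how an algorithm responds to changes in parts of the input, can be sketched as a counterfactual probe: hold every field fixed, vary one, and see whether the decision moves. The models and field names below are hypothetical.

```python
def counterfactual_probe(model, record, attribute, values):
    """Vary a single attribute of `record` and collect the model's
    decision for each value. Returns the outcomes and whether the
    decision was invariant to the attribute, a transparency check
    that is possible with algorithms but not with human deciders."""
    outcomes = {}
    for v in values:
        variant = {**record, attribute: v}
        outcomes[v] = model(variant)
    return outcomes, len(set(outcomes.values())) == 1
```

A decision that flips when only the postcode changes is exactly the kind of hidden dependence such a probe surfaces, and surfacing it is the first step to removing it from future tools.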

Microsoft patents tech to score meetings using body language, facial expressions, other data

Microsoft is facing criticism for its new “Productivity Score” technology, which can measure how much individual workers use email, chat and other digital tools. But it turns out the company has even bigger ideas for using technology to monitor workers in the interest of maximizing organizational productivity. Newly surfaced Microsoft patent filings describe a system for deriving and predicting “overall quality scores” for meetings using data such as body language, facial expressions, room temperature, time of day, and number of people in the meeting. The system uses cameras, sensors, and software tools to determine, for example, “how much a participant contributes to a meeting vs performing other tasks (e.g., texting, checking email, browsing the Internet).” The “meeting insight computing system” would then predict the likelihood that a group will hold a high-quality meeting. ... Microsoft says the goal is to help organizations ensure that their workers are taking advantage of tools like shared workspaces and cloud-based file sharing to work most efficiently. This also works to Microsoft’s advantage by encouraging the use of its products such as Teams and SharePoint inside companies, making future Microsoft 365 renewals more likely.

A family of computer scientists developed a blueprint for machine consciousness

Defining consciousness is only half the battle – and one that likely won’t be won until after we’ve aped it. The other side of the equation is observing and measuring consciousness. We can watch a puppy react to stimulus. Even plant consciousness can be observed. But for a machine to demonstrate consciousness its observers have to be certain it isn’t merely imitating consciousness through clever mimicry. Let’s not forget that GPT-3 can blow even the most cynical of minds with its uncanny ability to seem cogent, coherent, and poignant. The Blums get around this problem by designing a system that’s only meant to demonstrate consciousness. It won’t try to act human or convince you it’s thinking. This isn’t an art project. Instead, it works a bit like a digital hourglass where each grain of sand is information. The machine sends and receives information in the form of “chunks” that contain simple pieces of information. There can be multiple chunks of information competing for mental bandwidth, but only one chunk of information is processed at a time. And, perhaps most importantly, there’s a delay in sending the next chunk. This allows chunks to compete – with the loudest, most important one often winning.
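
The chunk-competition mechanism can be modeled with a priority queue: many chunks vie for a single processing slot, and at each step only the loudest one is broadcast. This is a toy illustration of the idea as described here, not the Blums' actual formalism.

```python
import heapq

def broadcast_cycle(chunks, steps):
    """Toy chunk competition: `chunks` is a list of (intensity, info)
    pairs all vying for mental bandwidth; each step broadcasts the
    single loudest chunk still waiting."""
    # heapq is a min-heap, so store negated intensity for max-first order.
    heap = [(-intensity, info) for intensity, info in chunks]
    heapq.heapify(heap)
    broadcast = []
    for _ in range(min(steps, len(heap))):
        _, info = heapq.heappop(heap)  # one chunk processed at a time
        broadcast.append(info)
    return broadcast
```

The delay between broadcasts matters in the Blums' design because it is what gives competing chunks time to accumulate and contend; the queue above only captures the "loudest wins, one at a time" outcome of that contest.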

AI in Practice, With and Without Data

Data-based methods work well for situations where new data observed do not deviate too much from old data learned. In particular, data-intensive methods showed astonishing results in the domains of image, speech, and language understanding, and also in gaming. In fact, they are the quintessential implementation of what Daniel Kahneman, Nobel laureate in economics, refers to as System-1 in his theory of the mind. Based on this theory, the mind is composed of two systems: System-1 governs our perception and classification, and System-2 governs our reasoning and planning. ... To quote Daphne Koller, “the world is noisy and messy” and we need to deal with noise and uncertainty, even when data is available in quantity. Here, we enter the domain of probability theory and the best set of methods to consider is probabilistic graphical models, where you model the subject under consideration. There are three kinds of probabilistic graphical models, from the least sophisticated to the most sophisticated: Bayesian networks, Markov networks, and hybrid networks. In these methods, you create a model that captures all the relevant general knowledge about the subject in quantitative, probabilistic terms, such as the cause-effect network of a troubleshooting application.
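
As a minimal illustration of the simplest case, a two-node cause-effect Bayesian network can be queried exactly with Bayes' rule: encode P(cause) and P(effect | cause), then invert to get P(cause | effect). The troubleshooting numbers below are invented for the example.

```python
def posterior(prior, likelihood):
    """Exact inference in a two-node cause -> effect Bayesian network.
    `prior` maps each cause to P(cause); `likelihood` maps each cause
    to P(observed effect | cause). Returns P(cause | observed effect)."""
    # Marginal P(effect) = sum over causes of P(effect|cause) * P(cause)
    evidence = sum(likelihood[c] * p for c, p in prior.items())
    return {c: likelihood[c] * p / evidence for c, p in prior.items()}
```

Observing a "no signal" symptom that is far more likely under a faulty cable shifts belief sharply toward that cause; richer Bayesian, Markov, and hybrid networks generalize this same calculation to many interconnected variables.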

Organizations with a culture of innovation fuelling business resilience

The study introduced the culture of innovation framework, which spans the dimensions of people, process, data, and technology, to assess organizations’ approach to innovation. It surveyed 439 business decision makers and 438 workers in India within a 6-month period, before and since COVID-19. The India study was part of a broader survey among 3,312 business decision makers and 3,495 workers across 15 markets in the Asia Pacific region. Through the research, organizations’ maturity was mapped and, as a result, organizations were grouped into four stages – traditionalist (stage 1), novice (stage 2), adaptor (stage 3) and leaders (stage 4). Leaders comprise the organizations that are the most mature in building a culture of innovation. “Innovation is no longer an option, but a necessity. We have seen how the recent crisis has spurred the need for transformation; for organizations to adapt and innovate in order to emerge stronger,” said Rajiv Sodhi, COO, Microsoft India. “We commissioned this research to gain better understanding of the relationship between having a culture of innovation and an organization’s growth. But now, more than achieving growth, we see that having a mature culture of innovation translates to resilience, and strength to withstand economic crises to recover,” he added.

Cyber Resilience During Times of Uncertainty

Current cybersecurity strategies tend to center around stopping potential threats from getting into your computing and communications infrastructure at all. To be successful, this requires that no employee ever click on a bad link, download the wrong file or work from an unsecured Wi-Fi network. However, this approach is neither realistic nor sufficient in today’s world, and it will be impossible in our collective future. That is why business leaders need to rethink their cyber strategy to adapt to our constantly changing world. In practice, the concept of cyber resilience is based on a bend-but-not-break philosophy. It understands that despite significant defensive investments and best efforts, cyber-criminals will occasionally get in. The cyber resilience approach is based on the premise that if you organize your defenses to prioritize resiliency over just computer security, you keep what’s most important going – your business. No matter what your business might be – whether it is churning out widgets or keeping the lights on – what’s key is to keep your most valuable assets unaffected and operational. Implementing this new goal, from the boardroom down, helps save money and improve results.

Governance Through Code for Microservices at Scale

Don’t go too far and take all decision making away from the development squads. It’s natural to think that the more we implement and the fewer the choices developers have to make the better. However, I’ve found that there is such a thing as going too far. On one of my projects the architecture team was making all of the decisions and creating frameworks/tools to govern through code. I still recall vividly one of the developers coming to me and saying “If you architects want to make all of the decisions and tell us how to implement things, then put your cell phone number in Pager Duty! If you want me to be accountable and be woken up at 3 AM when my code breaks in production, then I am going to make the decisions.” A decentralized governance approach was necessary and the role of the architect needed to be that of a boundary setter. While the Product Build Squad is designing and building the Microservices Framework and the Developer Onboarding Tool (using an Inner Source approach with contributions from other developers), development squads are already using the framework and tool. Depending on how many development squads your project/program has, you could have many Microservices with, say, Version 1.0 of the Microservices Framework.

Quote for the day:

"Success is often the result of taking a misstep in the right direction." - Al Bernstein

Daily Tech Digest - November 28, 2020

Top Digital Banking Transformation Trends for 2021

Despite the pandemic being an everyday reality for every person and business, organizations cannot use this as an excuse not to push forward even further. New initiatives must be prioritized that can leverage data and analytics to improve the customer experience. Digital engagement must be improved, focusing on simplicity and speed far beyond what is currently provided. And, financial institutions must optimize the use of technology that is already in place. Every bank and credit union must embrace the digital banking transformation trends that are forthcoming in 2021, doubling down on the commitment to improving digital customer experiences as well as the internal processes, infrastructure, products and personnel that will provide the foundation for future competitiveness. ... At a time when most financial institutions are concerned about the prospects of loan losses and the shrinking of revenues in a post-pandemic economy, the focus on automation and robotics seems natural. Robotic Process Automation (RPA) can increase efficiency by providing a cost-effective substitute to human resources both in-house and outsourced. While still in its infancy, RPA provides the benefits of cost reduction, increased efficiency, enhanced accuracy, improved customer experiences, and seamless flexibility.

Digital transformation: 3 things your organization can't afford to overlook

Legal costs are soaring for companies, even if they have their own internal counsel. An eDiscovery system, whether on premises or in the cloud, is going to need access to internal company documents, some of them many years old. eDiscovery systems are designed to code and reference documents for legal use. These legal search criteria will differ from the user-specified search criteria that finance, purchasing, and other business departments initially load into your document management system for their own search purposes. Being able to easily upload documents into an eDiscovery engine when you need to can potentially save your company hundreds of thousands of dollars in document legal discovery that would otherwise be manual and costly. Minimally, you should ensure that the document management system you use can easily interface with eDiscovery systems. One area in which paper documents have been getting digitized is medical records. This happens out of a desire to rid offices of paper, but also when company acquisitions are made and disparate medical systems (and their documentation) need to be blended.

Willyama's role in helping Indigenous Australians secure a career in cybersecurity

Beyond helping students, Willyama recently stepped up to open an Indigenous Business Precinct in Canberra. It forms part of a wider network of Indigenous precincts that have also opened in Melbourne and Brisbane, which are supported by other established Indigenous organisations. Hynes described the precinct as a place to "provide culturally appropriate and professional office space" for other Indigenous-owned businesses to "grow in a supportive environment and have full access to professional services, meeting rooms, teleconference facilities, and NBN that they may not have access to when trying to start a business or take the next step". The company has also been working for the last two years with Samsung and SupplyAus, another Indigenous-owned company, to integrate a Samsung developed heart monitoring system into the 190 Indigenous health centres across Australia. The motivation for Hynes to start these initiatives come down to his personal experience. "I've had siblings who have been in out of jail, other siblings that were adopted as kids who have been in jail, a whole lot of problematic issues, so there was an opportunity to see if we could make a difference and provide more career opportunities for Aboriginal people and also [army] veterans where we could," he said.

Four opportunities for tech startups in 2021

Edge computing, which is the concept of processing and analyzing data in servers closer to the applications they serve, is growing in popularity and opening new markets for established telecom providers, semiconductor and IoT start-ups, and new software ecosystems. With 5G networking technology continuing to roll out in 2021, the power of edge computing is set to see another upgrade, enabling new real-time applications in video processing and analytics, self-driving cars, robotics, and AI, among many others. “As data becomes increasingly more valuable to overall business success and decision making, businesses are looking to gain a legitimate and measurable advantage over their competition by moving data processing closer to the edge,” VoltDB’s CPO, Dheeraj Remella, told TechHQ previously. “Hundreds of thousands of IoT devices will be located throughout warehouses, trucking fleets, production plants, and more, to generate, capture and act on data. The ability to process, analyze and act on this information in real-time has the potential to transform these industries overnight,” said Remella. The vast IT ecosystem created by edge computing and demand from enterprise across industries will provide ample room for startups to emerge.

How to mitigate risks in an interconnected intelligent enterprise

Traditionally, audit teams responsible for ensuring regulatory standards perform manual checks to ensure compliance. Today, different business lines such as HR often use SaaS applications like SuccessFactors and Workday, complicating manual processes as audit teams struggle to find one source of truth because each application is often connected to others. These manual audits can take countless hours between screenshots and Excel spreadsheets, cost hundreds of thousands of dollars, and only show the results of a “point-in-time” check. Automation is the key to simplifying and streamlining these cumbersome tasks. A good solution can intelligently analyze connections between applications to get the full sense of where compliance errors originate and how to fix them, and push organizations to reach a level of “continuous compliance,” allowing them to streamline the most critical controls across business applications to save time and money, while capturing the evidence that auditors mandate for different compliance regulations. SaaS and cloud applications have become key factors in digital transformation and enable employees to become more efficient regardless of their working location. However, these same applications open up critical compliance and security risks that could put a company in the headlines and facing significant fines if not addressed correctly.

Creating an Analytics Culture, Part III: Sideways-In Traits

A major responsibility of the Analytics Council is to manage risk and prevent bad things from happening with data. This includes breaches of data security and privacy; lack of data quality and consistency that erodes business users’ trust in data; and lack of standards for key metrics and other data elements. To do this, organizations with a strong analytics culture create an enterprise data governance program to govern important data assets in operational and analytical systems throughout the organization. A data governance program defines standards for key data elements. It creates precise, unambiguous definitions of these data elements and documents them in a business glossary. It also defines policies and procedures to create and update those definitions, track data quality, detect problems, and resolve conflicts. The program also assigns owners and stewards to critical data elements to ensure the accuracy and security of data elements under their purview and escalate problems to the appropriate committees for review. Data governance is not sexy—it’s arduous, time-consuming committee work. But organizations with a strong analytics culture embrace these tasks with gusto, knowing the downsides if they don’t.
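To make the business-glossary idea concrete, here is an illustrative sketch of what one governed data element's record might contain: a precise definition plus an assigned owner and steward. The field names and example values are assumptions for the sketch, not a standard.

```python
# Hypothetical business-glossary record for one governed data element.
from dataclasses import dataclass, field

@dataclass
class GlossaryEntry:
    name: str
    definition: str
    owner: str                  # accountable for accuracy and security
    steward: str                # handles day-to-day quality issues
    quality_rules: list = field(default_factory=list)

entry = GlossaryEntry(
    name="net_revenue",
    definition="Gross revenue minus returns, discounts, and allowances, in USD.",
    owner="VP Finance",
    steward="finance-data-team",
    quality_rules=["non-negative", "reconciles to general ledger monthly"],
)
print(entry.name, "owned by", entry.owner)
```

The point of the structure is that every key metric has exactly one unambiguous definition and a named person to escalate problems to.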

Government Watchdog Calls for 5G Cybersecurity Standards

The government watchdog met with officials from selected federal agencies and companies involved with the development, deployment, or impacts of 5G networks. The agency states, "We also met with the four largest U.S. wireless carriers (AT&T Inc., Sprint Corporation, T-Mobile US, Inc., and Verizon Communications Inc.), industry organizations, standards bodies, and policy organizations." In addition, the officials at the agency met with representatives of four university wireless research programs and toured one of them. During the interviews with officials and representatives, GAO officials discussed 5G performance goals; 5G applications; the status of key technologies that will enable the performance or usage of 5G networks; challenges to the performance or usage of 5G in the U.S.; and policy options to address these challenges, according to the report. "5G potentially introduces new modes of cyberattack and an expanded number of points of attack, and it will likely exacerbate privacy concerns due to the increased precision of location data and the proliferation of IoT devices," the GAO notes. GAO also cited a report by the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency

Real-Time Analytics with Apache Kafka

Apache Kafka takes a different approach to message streaming compared to most pub/sub systems. It uses a log file abstraction to persist data and track consumption. This makes Kafka simpler than some other messaging systems and therefore easier to scale, at least in one location. With Kafka, administrators configure a Time To Live (TTL) that dictates how long the event brokers retain messages. The TTL can be minutes, hours, days, weeks, or longer. Consumer applications do not need to receive messages immediately and can even replay entire message streams long after the broker received them. This is difficult if not impossible in many other messaging systems, making Kafka preferable whenever stream replay is required. Apache Pulsar is similar to Kafka. It is also open source and provides many of the same capabilities. Pulsar has advantages over Kafka, which include built-in support for multi-tenancy and data replication. As a result, companies that need message distribution on a global scale will find Pulsar less complex to deploy and manage than Kafka. Nevertheless, Kafka’s open source community is larger than Pulsar’s, and this could be considered an advantage of Kafka.
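The log-file abstraction and replay behavior described above can be modeled in a few lines of plain Python. This is a toy model of the concept, not the Kafka client API: the broker appends events to a retained log, each consumer tracks its own offset, and reading never deletes, so a consumer can rewind and replay the stream until the TTL evicts old messages.

```python
# Toy model of Kafka's log abstraction: append-only storage with TTL-based
# retention and offset-based reads, so consumers can replay at will.
import time

class Log:
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.entries = []  # list of (timestamp, message)

    def append(self, message):
        self.entries.append((time.time(), message))

    def expire(self, now=None):
        """Drop messages older than the TTL, as the broker's retention would."""
        now = now if now is not None else time.time()
        self.entries = [(t, m) for t, m in self.entries if now - t < self.ttl]

    def read_from(self, offset):
        """Replay from any offset; consumption does not remove messages."""
        return [m for _, m in self.entries[offset:]]

log = Log(ttl_seconds=3600)
for msg in ["a", "b", "c"]:
    log.append(msg)

print(log.read_from(0))  # full replay of the stream: ['a', 'b', 'c']
print(log.read_from(2))  # a second consumer reading at its own offset: ['c']
```

Contrast this with a traditional queue, where a delivered message is gone; here replay is just reading from offset zero again.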

Avoiding the ‘Technical Debt’ Trap

The pandemic exposes another type of debt that I’ve observed as manufacturers respond to similar disruptions and other unexpected market shifts. It’s called “technical debt.” Programmer Ward Cunningham coined the term in the early 1990s to describe a scenario in which people deploy software using the fastest, easiest code available with little regard for future technology needs. This can occur when manufacturers quickly launch an IoT program without considering their future scalability needs. They may bring in coding experts who can get a system up and running fast but lack foresight into operational demands that could require future system changes. ... The need for standardized data modeling has become more urgent in recent years as manufacturers move toward cloud-based analytics. At HighByte, we call this an “object-oriented approach.” It involves the use of common models to integrate and manage information coming from multiple sources without the need for custom scripts. Over the years, you’ve likely accumulated different equipment models from various vendors, possibly located in different facilities. To fully leverage the value of IoT, you need the ability to view and analyze information from these disparate systems without having to rewrite code for each system or subsystem. 
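The "common model" idea above can be sketched as a mapping layer: vendor-specific telemetry payloads are translated into one shared shape, so analytics code never needs per-vendor custom scripts. The vendor names and field names below are invented for illustration and are not HighByte's actual implementation.

```python
# Hypothetical normalization of vendor-specific payloads into a common model.
COMMON_FIELDS = ("machine_id", "temperature_c", "timestamp")

VENDOR_MAPPINGS = {
    "vendor_a": {"machine_id": "id", "temperature_c": "tempC",
                 "timestamp": "ts"},
    "vendor_b": {"machine_id": "asset", "temperature_c": "temp_celsius",
                 "timestamp": "time"},
}

def to_common_model(vendor, payload):
    """Rename a vendor payload's fields into the shared model."""
    mapping = VENDOR_MAPPINGS[vendor]
    return {common: payload[source] for common, source in mapping.items()}

a = to_common_model("vendor_a", {"id": "M1", "tempC": 71.5, "ts": 1700000000})
b = to_common_model("vendor_b", {"asset": "M2", "temp_celsius": 68.0,
                                 "time": 1700000005})
print(a["temperature_c"], b["temperature_c"])  # comparable across vendors
```

Adding a new equipment vendor then means adding one mapping entry, not rewriting downstream analytics code.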

Operational maturity: what it is, why you need it, how to get it

A lack of operational maturity often reveals itself in the cracks in performance as a business starts to grow. It might be apparent in frequent outages, lengthy release cycles or laborious, error-prone processes. When you dig deeper, you may find that database performance isn’t scalable enough, disaster recovery isn’t fast enough, application security isn’t strong enough. All these things point to the fact that your operational maturity isn’t scaling in line with business needs. When tell-tale performance issues arise, it’s definitely time to reprioritise operability. But deciding the best way to go about it isn’t easy. Clearly, if you put all your time and energy into innovation with little emphasis on operability, problems are going to escalate. But putting the brakes on innovation for six months while you get operations in-hand is risky too. There’s a good chance that competitors will get ahead of you in the innovation stakes, potentially luring away your customers. On the other hand, if you fail to build operational maturity into your plan, there may come a time when a major incident forces you to put everything else on hold while it’s sorted out. And remember, getting your customers back and regaining their trust after a major incident is far from certain.

Quote for the day:

In matters of style, swim with the current, in matters of principle, stand like a rock. - Thomas Jefferson

Daily Tech Digest - November 27, 2020

Algorithmic transparency obligations needed in public sector

“The Centre for Data Ethics and Innovation has today set out a range of measures to help the UK achieve this, with a focus on enhancing transparency and accountability in decision-making processes that have a significant impact on individuals. Not only does the report propose a roadmap to tackle the risks, but it highlights the opportunity that good use of data presents to address historical unfairness and avoid new biases in key areas of life.” ... These include historical bias, in which data reflecting previously biased human decision-making or historical social inequalities is used to build the model; data selection bias, in which the data collection methods used mean it is not representative; and algorithmic design bias, in which the design of the algorithm itself leads to an introduction of bias. Bias can also enter the algorithmic decision-making process because of human error as, depending on how humans interpret or use the outputs of an algorithm, there is a risk of bias re-entering the process as they apply their own conscious or unconscious biases to the final decision. “There is also risk that bias can be amplified over time by feedback loops, as models are incrementally retrained on new data generated, either fully or partly, via use of earlier versions of the model in decision-making,” says the review.
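The feedback-loop risk in the last sentence can be demonstrated with a minimal simulation: a model retrained only on data shaped by its own earlier decisions drifts further from the ground truth each cycle. The numbers and drift rate are illustrative, not taken from the review.

```python
# Toy simulation of bias amplification via a retraining feedback loop.
def retrain(approval_rate, generations, drift=0.1):
    """Each generation trains on data skewed by the previous model's outputs."""
    rates = [approval_rate]
    for _ in range(generations):
        # Approved cases dominate the next training set, nudging the rate up.
        approval_rate = min(1.0, approval_rate * (1 + drift))
        rates.append(round(approval_rate, 3))
    return rates

print(retrain(0.6, 5))  # a small initial skew compounds over retraining cycles
```

Even a modest per-cycle skew compounds geometrically, which is why the review recommends monitoring models over time rather than auditing them once at deployment.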

Kick-off Your Transformation by Imagining It Had Failed

An effective way to get the team into the right mindset – i.e. to think as if they’re not merely looking into the future but are actually in it – is to have everyone participate in telling the story of the transformation. What did happen? What noteworthy events took place? What were the highs and lows of the transformation? The power of this exercise is that it challenges the team to ‘go deep’ into this prospective hindsight narrative, constructing a plausible chain of events that must logically lead to the outcome (the failure of the transformation). This opens a broader spectrum of potential reasons for failure, further enriching the conversation and providing us with a goldmine of potential insights. One way to collaboratively write the story of the transformation is by constructing a timeline. Ask the team to break into small groups of 3 or 4. Each small group will work to chronologically list significant events that they believe led to the failure of the transformation. The timeline should be divided into meaningful time periods (taking into account how far back we are looking) – e.g. quarterly.

Have attitudes to tech investment changed at board level due to Covid-19?

“There has been a great acceleration of thinking around technology and its role in business,” says Chapman. “Technology has been a crucial lifeline through this pandemic, essential to maintaining business efficiency and productivity.” The adoption of collaborative tools and the use of the cloud has allowed workforces to continue working effectively in remote environments, as much of the world went into lockdown. Chapman explains that this has helped change the attitude of boards of directors towards technology, for two reasons. First, technology has proved itself by successfully enabling entire workforces to work from home. “If this pandemic had happened even five years ago, IT teams would not have succeeded, but they did in 2020 because the technology was ready, and so was the appetite from users to adopt it,” he says. The second reason is that the pandemic demonstrated clearly what was possible – digital modernisation has accelerated across nearly every industry. According to a recent IFS study, 70% of businesses increased or maintained digital transformation spend during the pandemic. The survey data indicated that enterprise plans to increase spending on digital transformation tracks closely with concerns about economic conditions disrupting business.

Bandook: Signed & Delivered

Check Point Research recently observed a new wave of campaigns against various targets worldwide that utilizes a strain of a 13-year-old backdoor Trojan named Bandook. Bandook, which had almost disappeared from the threat landscape, was featured in 2015 and 2017 campaigns, dubbed “Operation Manul” and “Dark Caracal“, respectively. These campaigns were presumed to be carried out by the Kazakh and the Lebanese governments, as uncovered by the Electronic Frontier Foundation (EFF) and Lookout. During this past year, dozens of digitally signed variants of this once commodity malware started to reappear in the threat landscape, reigniting interest in this old malware family. In the latest wave of attacks, we once again identified an unusually large variety of targeted sectors and locations. ... The full infection chain of the attack can be broken down into three main stages. The first stage starts, as in many other infection chains, with a malicious Microsoft Word document delivered inside a ZIP file. Once the document is opened, malicious macros are downloaded using the external template feature. The macros’ code in turn drops and executes the second stage of the attack, a PowerShell script encrypted inside the original Word document. Finally, the PowerShell script downloads and executes the last stage of the infection: the Bandook backdoor.

Cybersecurity Predictions for 2021: Robot Overlords No, Connected Car Hacks Yes

One of the reasons we’ll see more internal attacks is that password-management tools and multi-factor authentication (MFA) will become more prevalent. This will help slow the rate of account-compromise attacks through phishing and data theft. These tools are very effective at reducing the threat from compromised accounts, with token-based MFA being the more effective of the two, but usage has grown slowly over the years. However, inexpensive physical tokens and software-based equivalents make them accessible. User acceptance will still be a challenge going into the new year and, probably, for several years more. We’re also likely to see a growth in risk-based access control technologies, where security analytics tools are used to help decide what level of authentication is appropriate on a case-by-case basis. This will reduce the burden on users by only requiring additional authentication when needed, while making it more difficult for attackers by tying behavior analysis techniques into the security stack. This also ties into zero-trust architectures, which should also see growth moving into 2021 and beyond. Security analytics as a technology will see more use, being incorporated into existing security stacks by seamlessly merging into existing solutions.
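The risk-based access control idea above reduces to a simple rule: score each sign-in from a few behavioral signals and demand step-up MFA only when the score crosses a threshold. The signals, weights, and threshold below are invented for the sketch; a real system would learn them from security analytics.

```python
# Hypothetical risk-based access-control rule: step-up MFA only when needed.
def risk_score(signin):
    score = 0
    if signin.get("new_device"):        score += 40
    if signin.get("unusual_location"):  score += 35
    if signin.get("impossible_travel"): score += 60
    if signin.get("off_hours"):         score += 10
    return score

def required_auth(signin, mfa_threshold=50):
    """Low-risk sign-ins skip the extra prompt; risky ones require a token."""
    return "token_mfa" if risk_score(signin) >= mfa_threshold else "password_only"

print(required_auth({"off_hours": True}))                             # password_only
print(required_auth({"new_device": True, "unusual_location": True}))  # token_mfa
```

This is the user-burden trade-off the article describes: most sign-ins stay frictionless, while anomalous behavior triggers the stronger factor.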

Enterprises addressing data security and e-waste issues generated by remote work

“The flood of technology investment which followed the beginning of the pandemic has created clear issues for both e-waste and secure data management,” said Alan Bentley, President of Global Strategy at Blancco. “The switch to remote work spurred on a wave of new device purchases, but these new, widely distributed devices have left enterprises feeling vulnerable. It’s fascinating that so many businesses have implemented roles to manage the e-waste issue resulting from COVID-19, demonstrating corporate social responsibility (CSR), but also their concern around how these devices will be dealt with when they reach end-of-life. “It’s crucial that this issue is not overlooked and that these devices are appropriately disposed of. But it’s just as crucial to ensure the safeguarding of sensitive data during that process. “Appropriate data sanitization might at times be overlooked as an element of e-waste policies, but it is the perfect opportunity to engage data management best practices. Because not only will this reduce environmental impact, it will also remove the risk of a data breach when disposing of devices at end-of-life.” The report concludes that enterprises must rethink their device management practices.

Fix bottlenecks before tackling business process automation

Describing the approach the company took to optimise the process, Novais says: “We started with pen and paper to define the process, then modelled it using Tibco, to identify gaps in how it was working and to describe what we wanted to achieve.” The overall objective of the employee onboarding process was to ensure new employees get all the applications they need for their job at Cosentino. Putting in place new and improved business processes is most successful if someone from the business can champion the change. Novais adds: “It is not easy to show someone they are not efficient.” Cosentino identified key users who could help others to understand how the business process improves the way they work. Novais says dashboards are used to help the company assess business processes to understand bottlenecks. “We can review processes on a regular basis,” he adds. The company has a cloud strategy based on Microsoft Azure and the Tibco cloud and is actively building applications that extend its legacy SAP enterprise resource planning (ERP) system. For instance, Novais says Cosentino is extracting data from the ERP for a new purchase-to-pay business process that is being run outside the ERP.

Use social design to help your distributed team self-organize

An alternative to the top-down approach is to let function drive form, supporting those most directly connected to creating value for customers. Think of it as bottom-up or outside-in. One discipline useful in such efforts is social design, a subspecialty of design that aspires to solve complex human issues by supporting, facilitating, and empowering cultures and communities. Its practitioners design systems, not simply beautiful things. I spoke with one of the pioneers in this area, Cheryl Heller, author of The Intergalactic Design Guide: Harnessing the Creative Potential of Social Design. Her current work at Arizona State University centers on integrating design thinking and practice into functions that don't typically utilize design principles. “People’s work is often their only source of stability right now,” she told me. “You have to be careful, because people are brittle.” Beware the fear-inducing “burning platform” metaphor frequently used in change management (the idea being, essentially, that people must be forced to overcome resistance to change). Heller explained that people using traditional business thinking are often in a hurry to “get to outcomes” and that haste is counterproductive when dealing with human relationships because it can lead to disengagement and ultimately failure.

Overcoming the pandemic era with a solid business continuity plan

IT leaders also felt that the pandemic had exposed their lack of preparedness for different working arrangements (28%). Nearly six months after the coronavirus upended our traditional working practices, businesses across the world are grappling to turn their temporary fixes into more sustainable processes that will support many employees that expect to work from home for the foreseeable future. This means employing the right technology solutions to ensure workers can be as productive and efficient at home as they are in the office. That might mean migrating to the cloud to ensure easy access to tools and documents from remote locations, or implementing collaboration tools that enable quicker and simpler communication between employees while remaining secure. But it’s important that this challenge isn’t just viewed in the context of a technical fix. Businesses will also need to reassess processes and ensure employee benefits packages reflect a remote working structure. For example, this may involve providing the right physical set-up to ensure people can work comfortably, or launching wellness programmes to support the emotional and mental health of their employees.

Failing Toward Zero: Why Your Security Needs to Fail to Get Better

Cybercriminals need to succeed only once, but organizations need to succeed every time. While it's more than likely that your organization will be the target of a successful cyberattack, a successful cyberattack doesn't necessarily mean a catastrophic data breach. If you know your security is going to fail at some point, you can prepare for this eventuality and mitigate its impact on operations. It's at this intersection of antifragility and cybersecurity that we get a model I'm calling "failing toward zero." Failing toward zero is a state in which each security incident leads to a successive reduction in future incidents of the same type. Organizations that fail toward zero embrace failure and learn from their mistakes. Our data suggests that smart companies are already starting to do this. The Data Science and Engineering team at Malwarebytes examined all detection data on business endpoints for the past three years. It's no surprise that malware detections on business endpoints went up every single year, from 7,553,354 in 2017 to around 49 million in 2020 — and the year isn't even over yet. However, the detections we're facing today are different from those we saw just a few years ago.

Quote for the day:

"If you want staff to give great service, give great service to staff." -- Ari Weinzweig

Daily Tech Digest - November 26, 2020

How to master microservices data architecture design

Optimizing microservices applications for data management takes the right combination of application design and database technology. It isn't a matter of simply choosing one database model over another, or placing a database in some external container. Instead, it comes down to staying focused on a set of database attributes known as ACID: atomicity, consistency, isolation and durability. Atomicity dictates that database operations should never be left partially complete: It either happens, or it doesn't. These operations shouldn't be parsed or broken out into smaller sets of independent tasks. Consistency means that the database never violates the rules that govern how it handles failures. For example, if a multistep change fails halfway through execution, the database must always roll back the operation completely to avoid retaining inaccurate data. Isolation is the principle that every single database transaction should operate without relying on or affecting the others. This allows the database to consistently accommodate multiple operations at once while still keeping its own failures contained. Durability is another word for a database's resilience. Architects should always plan for failure and disruptions by implementing the appropriate rollback mechanisms, remaining mindful of couplings, and regularly testing the database's response to certain failures.
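The atomicity property is easy to observe directly with SQLite, which ships with Python. In this sketch, a two-step transfer is interrupted halfway; because the steps run in one transaction, the database rolls back to its prior state rather than retaining a partial update.

```python
# Demonstrating atomicity with SQLite: a multistep change commits whole
# or rolls back whole when a step fails.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 50)])
conn.commit()

try:
    with conn:  # opens a transaction; rolls back automatically on exception
        conn.execute(
            "UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        raise RuntimeError("simulated crash between the two steps")
        conn.execute(
            "UPDATE accounts SET balance = balance + 70 WHERE name = 'bob'")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}, the partial debit was rolled back
```

In a microservices context, the same guarantee has to hold per service database; cross-service consistency then needs patterns like sagas, which is where the design work comes in.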

Building a Self-Service Cloud Services Brokerage at Scale

The concepts and architecture behind a cloud brokerage are continually evolving. In a recent cloud brokerage survey on pulse.qa, an online community of IT executives, 29% of respondents answered that they outsourced the development of their cloud brokerage to a regional systems integrator (SI) or professional services firm. More interesting is that 56% of respondents built and launched their brokerage using a hybrid team of their own staff and expert outside contractors. When choosing a third-party SI or professional services firm, look for a provider with experience building brokerages for other customers like your organization. You should also investigate their strategic alliances with the CSPs and tools providers your organization requires in your brokerage. When it comes to expert outside contractors, the same rules apply. You might get lucky with finding such highly skilled contractors through contingent staffing firms – the so-called body shops – if you’re willing to go through enough resumes. However, when finding contractors for your cloud brokerage you’ll probably need to exercise your own team members’ professional networks to find the right caliber of cloud contractor.

The Future of Developer Careers

As a developer in a world with frequent deploys, the first few things I want to know about a production issue are: When did it start happening? Which build is, or was, live? Which code changes were new at that time? And is there anything special about the conditions under which my code is running? The ability to correlate some signal to a specific build or code release is table stakes for developers looking to grok production. Not coincidentally, “build ID” is precisely the sort of “unbounded source” of metadata that traditional monitoring tools warn against including. In metrics-based monitoring systems, doing so commits to an infinitely increasing set of metrics captured, negatively impacting the performance of that monitoring system AND with the added “benefit” of paying your monitoring vendor substantially more for it. Feature flags — and the combinatorial explosion of possible parameters when multiple live feature flags intersect — throw additional wrenches into answering Question 1. And yet, feature flags are here to stay; so our tooling and techniques simply have to level up to support this more flexibly defined world. ...  A developer approach to debugging prod means being able to isolate the impact of the code by endpoint, by function, by payload type, by response status, or by any other arbitrary metadata used to define a test case. 
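Correlating a production signal to a specific build, as described above, amounts to attaching the build ID as metadata on every event and grouping by it when errors spike. The event shape and build IDs below are invented for illustration.

```python
# Sketch: tag telemetry events with a build ID, then ask which build
# the errors correlate with.
import collections

def emit(events, build_id, endpoint, status):
    events.append({"build_id": build_id, "endpoint": endpoint, "status": status})

events = []
emit(events, "build-1041", "/checkout", 200)
emit(events, "build-1042", "/checkout", 500)
emit(events, "build-1042", "/checkout", 500)

errors_by_build = collections.Counter(
    e["build_id"] for e in events if e["status"] >= 500)
print(errors_by_build.most_common(1))  # [('build-1042', 2)]
```

Because build IDs form an ever-growing set, this is exactly the "unbounded source" of metadata that metrics-based systems penalize, which is the article's argument for event-level tooling.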

Brain researchers get NVMe-over-RoCE for super-fast HPC storage

NVMe-over-fabrics is a storage protocol that allows NVMe solid-state drives (SSDs) to be treated as extensions of non-volatile memory connected via the server PCIe bus. It does away with the SCSI protocol as an intermediate layer, which tends to form a bottleneck, and so allows for flow rates several times faster compared to a traditionally connected array. NVMe using RoCE is an implementation of NVMe-over-Fabrics that uses pretty much standard Ethernet cables and switches. The benefit here is that this is an already-deployed infrastructure in a lot of office buildings. NVMe-over-RoCE doesn’t make use of TCP/IP layers. That’s distinct from NVMe-over-TCP, which is a little less performant and doesn’t allow for storage and network traffic to pass across the same connections. “At first, we could connect OpenFlex via network equipment that we had in place, which was 10Gbps. But it was getting old, so in a fairly short time we moved to 100Gbps, which allowed OpenFlex to flex its muscles,” says Vidal. ICM verified the feasibility of the deployment with its integration partner 2CRSi, which came up with the idea of implementing OpenFlex like a SAN in which the capacity would appear local to each workstation.

Understanding Zapier, the workflow automation platform for business

In addition to using Zapier to connect workflows, companies have turned to it for help during the COVID-19 pandemic. Foster said his company has helped smaller firms move their business online quickly, connecting and updating various applications such as CRM records.  “Many small business owners don’t have the technical expertise or someone on staff that can build these sites for them,” he said. “So they turn to no-code tools to create professional websites, and built automations with Zapier to reach new customers, manage inventory, and ensure leads didn’t slip through the cracks.” Saving employees time spent on repetitive tasks is a common benefit, said Andrew Davison, founder of Luhhu, a UK-based workflow automation consultancy and Zapier expert. He pointed to the amount of time wasted when workers have to key in the same data in different systems; that situation is only getting worse as businesses rely on more and more apps. “Zapier can eliminate this, meaning staffing costs can be reduced outright, or staff can be redeployed to more meaningful, growth-orientated work,” he said. “And human error with data entry is avoided — which can definitely be an important thing for some businesses in sensitive areas — like legal, for example.”

Microsoft's low-code tools: Now everyone can be a developer

Microsoft's new wave of low- and no-code tools in the Power Platform builds on this, providing tooling for UI construction, for business process automation, and for working with data. This fits in well with the current demographic shifts, with new entrant workers coming from the generation that grew up with open-world building games like Minecraft. Low-code tools might not look like Minecraft worlds, but they give users the same freedom to construct a work environment. There's a lot of demand, as Charles Lamanna, Microsoft CVP, Low Code Application Platform, notes: "Over 500 million new apps will be built during the next five years, which is more than all the apps built in the last 40 years." Most of those apps need to be low-code, as there's more than an app gap -- there's also a developer gap, as there's more demand for applications than there are developers to build that code. Much of that demand is being driven by a rapid, unexpected, digital transformation. People who suddenly find themselves working from home and outside the normal office environment need new tools to help manage what were often manual business processes. The asynchronous nature of modern business makes no-code tooling an easy way of delivering these new applications, as Lamanna notes: "It's kind of come into its own over the last year with the fastest period of adoption we've ever seen across the board from like a usage point of view, and that's just because of all these trends are coming to a head right now."

How to build the right data architecture for customer analytics

Whatever your tools, they’re only as good as the data that feeds them – so when building any data architecture, you need to pay attention to the foundations. Customer data platforms (CDPs) are the way to go for this, as they centralise, clean and consolidate all the data your business is collecting from thousands of touchpoints. They coordinate all of your different data sources – almost like the conductor in an orchestra – and channel that data to all the places you need it. As a central resource, a CDP eliminates data silos and ensures that every team across your company has live access to reliable, consistent information. CDPs can also segment customer data – sorting it into audiences and profiles – and most importantly, can easily integrate with the types of analytics or marketing tools already mentioned. CDPs are often seen as a more modern replacement for DMP (Data management platform) and CRM (customer relationship management) systems, which are unsuited to the multiplicity of digital customer touchpoints that businesses now have to deal with. ... When you have the basics in place, deep learning and artificial intelligence can allow you to go further. These cutting-edge applications learn from existing customer data to take the experience to the next level, for instance by automatically suggesting new offers based on past behaviour.
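At its core, the consolidation a CDP performs looks like the sketch below: merge records about the same customer from several touchpoints into one profile, keyed here on email. The source names, fields, and conflict rule (later records win) are assumptions for illustration, not any particular CDP's behavior.

```python
# Hypothetical CDP-style consolidation of touchpoint records into profiles.
def consolidate(records):
    profiles = {}
    for rec in records:
        profile = profiles.setdefault(rec["email"], {"sources": set()})
        profile["sources"].add(rec["source"])
        for key, value in rec.items():
            if key != "source" and value is not None:
                profile[key] = value  # later records win on conflicts
    return profiles

records = [
    {"email": "ana@example.com", "source": "web", "last_page": "/pricing"},
    {"email": "ana@example.com", "source": "crm", "name": "Ana", "plan": "pro"},
    {"email": "ana@example.com", "source": "email",
     "opened_campaign": "nov-promo"},
]
profile = consolidate(records)["ana@example.com"]
print(sorted(profile["sources"]))  # ['crm', 'email', 'web']
```

The single merged profile is what eliminates the silos: every downstream team queries the same consolidated record instead of its own copy.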

Staying Flexible With Hybrid Workplaces

Once employers start tracking the ways in which their teams communicate and learn, they can begin to find solutions to better spread that knowledge. For example, is most of the learning coming from an outdated employee handbook, or is there one person on the team that everyone goes to when there’s a question? Is the technology that you’re using causing more confusion, and do you see your team focusing on workarounds as opposed to the ideal solution? Technology and tools should be our friends, and it’s in the best interest of your organization to understand how people use them. That way you can optimize the ones in place, or find something that’s more suitable to your specific needs. If you see that your workforce is spending unneeded energy wrestling with clunky software, or bypassing certain guidelines and processes for something simpler, then you have a disconnect. This issue is only going to widen when your teams are driven apart by distance, which will inevitably damage productivity, efficiency, and project success. Getting feedback from employees is the most effective way to uncover these learning processes, whether it is done through internal surveys or recurring check-ins. Through this feedback, you can weed out what isn’t working from what is.

Edge computing in hybrid cloud: 3 approaches

The edge tier is a small and inexpensive device that mounts on the motorcycle, which uses direct Bluetooth communication to connect with a dozen sensors on the bike, as well as a smartwatch that the rider wears to monitor biotelemetry. Finally, a Lidar-based scanner tracks other moving vehicles near the bike, including ones that are likely to be a threat. The data the edge device gathers is also responsible for real-time alerting for things such as speed, behavior, and direction of other close vehicles that are likely to put the rider at risk. This alerts the rider about hazardous road conditions and obstacles such as gravel or ice, as well as issues with the motorcycle itself, such as overheated brakes that may lengthen stopping distance, a lean angle that's too aggressive for your current speed, and hundreds of other conditions that will generate alerts to the rider to avoid accidents. Moreover, the edge device will alert the rider if heart rate, blood pressure, or other vitals exceed a threshold. Keep in mind that you need the edge device here to deal instantaneously with data such as speed, blood pressure, the truck about to rear-end the rider, and so on. However, it makes sense to transmit the data to a public cloud for deeper processing—for example, the ability to understand emerging patterns that may lead up to an accident, or even bike maintenance issues that could lead to a dangerous situation.
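The edge tier's instantaneous alerting boils down to evaluating local rules over each telemetry sample with no cloud round trip. A minimal sketch, with thresholds, sensor names, and rule wording invented for illustration:

```python
# Hypothetical on-device rule checks for the motorcycle edge tier.
RULES = [
    ("lean angle too aggressive for speed",
     lambda s: s["lean_deg"] > 45 and s["speed_kph"] > 80),
    ("brake temperature critical", lambda s: s["brake_temp_c"] > 400),
    ("rider heart rate above threshold", lambda s: s["heart_rate"] > 180),
]

def check(sample):
    """Evaluate every rule against one telemetry sample, locally."""
    return [name for name, rule in RULES if rule(sample)]

sample = {"lean_deg": 48, "speed_kph": 95, "brake_temp_c": 410,
          "heart_rate": 150}
print(check(sample))
```

The raw samples can still be shipped to the cloud afterward for the deeper pattern analysis the article mentions; only the time-critical rules need to live on the device.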

Using Agile with a Data Science Team

The idea for applying agile to data science was that all four steps would be completed in each sprint, with a demo at the end. Applied this way, they could understand together whether the agile model was feasible. Satti conducted agile ways-of-working sessions with the team to teach them the importance of collaboration, interactions, respect, ownership, improvement, learning cycles and delivering value. The team had to go through a cultural shift and a change of mindset, because agile in data science would only work if data scientists understood and trusted its advantages, Satti said. The main benefit of introducing agile was an immediate increase in productivity: team members were clear on their priorities and able to focus on specific tasks, Satti said. As a result, the team was able to commit to deliverables and timelines. Most of the committed deadlines were met, making the stakeholders happy and increasing confidence in the team. Having the buy-in of the data science team was crucial, and they had to be taken on a journey with agile instead of having it forced on them, Satti mentioned.

Quote for the day:

"Blessed are the people whose leaders can look destiny in the eye without flinching but also without attempting to play God" -- Henry Kissinger

Daily Tech Digest - November 25, 2020

To do in 2021: Get up to speed with quantum computing 101

For business leaders who are new to quantum computing, the overarching question is whether to invest the time and effort required to develop a quantum strategy, Savoie wrote in a recent column for Forbes. The business advantages could be significant, but developing this expertise is expensive and the ROI is still long term. Understanding early use cases for the technology can inform this decision. Savoie said that one early use for quantum computing is optimization problems, such as the classic traveling salesman problem of trying to find the shortest route that connects multiple cities. "Optimization problems hold enormous importance for finance, where quantum can be used to model complex financial problems with millions of variables, for instance to make stock market predictions and optimize portfolios," he said. Savoie said that one of the most valuable applications for quantum computing is to create synthetic data to fill gaps in data used to train machine learning models. "For example, augmenting training data in this way could improve the ability of machine learning models to detect rare cancers or model rare events, such as pandemics," he said. 
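To see why the traveling salesman problem is a natural early target, consider the classical brute-force solution: it must examine every possible ordering of cities, so the work grows factorially with the number of cities. A minimal sketch, using a toy distance matrix purely for illustration:

```python
from itertools import permutations

def tour_length(order, dist):
    # Sum each leg, closing the loop back to the starting city.
    legs = zip(order, order[1:] + order[:1])
    return sum(dist[a][b] for a, b in legs)

def shortest_tour(dist):
    """Exhaustive search over all n! orderings -- exactly the kind of
    combinatorial blow-up that motivates quantum and heuristic methods."""
    cities = tuple(range(len(dist)))
    return min(permutations(cities), key=lambda t: tour_length(t, dist))
```

At 4 cities this checks 24 tours; at 20 cities it would need more than 10^18, which is why even modest instances are handled heuristically today.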

SmartKey And Chainlink To Collaborate In Govt-Approved Blockchain Project

Chainlink is the missing link in developing and delivering a virtually limitless number of smart city integrations that combine SmartKey's API and blockchain-enabled hardware with real-world data and systems, harnessing the power of automated, data-driven IoT applications with tangible value. The two protocols are complementary: the SmartKey protocol manages access to different physical devices across the Blockchain of Things (BoT) space (e.g. opening a gate), while the Chainlink Network allows developers to connect SmartKey functionalities with different sources of data (e.g. weather data, user web apps). The integration focuses on connecting all the data and events sourced and delivered by the Chainlink ecosystem to the SmartKey connector, which then turns that data (commands issued by Ethereum smart contracts) into instructions for IoT devices (e.g. GSM/GPS sensors). Our connectors can also deliver information to Chainlink oracles confirming that these real-world instructions were carried out (e.g. a gate was opened), potentially leading to additional smart contract outputs. The confirmation of service delivery is a "contract key" that connects both ecosystems into one "world" and relays an Ethereum action to IoT devices.
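The round trip described above — a smart-contract command flowing through the oracle layer to a physical device, with a confirmation flowing back — can be sketched in plain Python. Every name here (`OracleCommand`, `Connector`, the receipt shape) is an illustrative assumption, not SmartKey's or Chainlink's actual API:

```python
from dataclasses import dataclass

@dataclass
class OracleCommand:
    """A command as it might arrive from the oracle layer."""
    device_id: str
    action: str  # e.g. "open_gate"

class Connector:
    """Illustrative stand-in for a BoT connector: turns on-chain
    commands into device instructions and reports back a receipt."""
    def __init__(self):
        self.log = []

    def execute(self, cmd):
        # A real connector would drive hardware here (gate actuators,
        # GSM/GPS sensors); this sketch only records the instruction.
        self.log.append((cmd.device_id, cmd.action))
        # The confirmation ("contract key") relayed back on-chain.
        return {"device_id": cmd.device_id, "status": "completed"}
```

The point of the sketch is the shape of the loop: command in, physical action, signed confirmation out, which is what lets a smart contract verify that the real-world step actually happened.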

DevOps + Serverless = Event Driven Automation

For the most part, Serverless is seen as Function as a Service (FaaS). While it is true that most Serverless code being implemented today is FaaS, that's not the destination but a pitstop. The Serverless space is still evolving. Let's take a journey and explore how far Serverless has come, and where it is going. Our industry started with what I call "Phase 1.0", when we just started talking or hearing about Serverless and, for the most part, thought about it as Functions: small snippets of code running on demand and for a short period of time. AWS Lambda made this paradigm very popular, but it had its own limitations around execution time, protocols, and a poor local development experience. Since then, more people have realized that the same serverless traits and benefits could be applied to microservices and Linux containers. This leads us to what I'm calling "Phase 1.5". Some solutions here completely abstract Kubernetes, delivering the serverless experience through an abstraction layer that sits on top of it, like Knative. By opening up Serverless to containers, users are no longer limited to function runtimes and can use any programming language they want.
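The "Phase 1.0" model boils down to a stateless handler invoked once per event. A minimal Lambda-style function in Python — the two-argument `(event, context)` signature matches AWS Lambda's Python runtime, but the event shape here is an assumption for illustration:

```python
import json

def handler(event, context=None):
    """Stateless and short-lived: the FaaS traits described above.
    No server is managed between invocations; state lives elsewhere."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Everything beyond the handler — scaling, routing, teardown — is the platform's job, which is exactly the trait that "Phase 1.5" extends from functions to whole containers.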

Self-documenting Architecture

A self-documenting architecture would reduce the learning curve. It would accentuate poor design choices and help us to make better ones. It would help us to see the complexity we are adding to the big picture as we make changes in the small, and help us to keep complexity lower. And it would save us from messy whiteboard diagrams that explain how one person incorrectly thinks the system works. ... As software systems gradually evolve on a continual basis, individual decisions may appear to make sense in isolation, but from a big-picture architectural perspective those changes may add unnecessary complexity to the system. With a self-documenting architecture, everybody who makes changes to the system can easily zoom out to the bigger picture and consider the wider implications of their changes. One of the reasons I use the Bounded Context Canvas is that it visualises all of the key design decisions for an individual service. Problems with inconsistent naming, poorly-defined boundaries, or highly-coupled public interfaces jump out at you. When these decisions are made in isolation they seem OK; it is only when considered in the bigger picture that the overall design appears sub-optimal.

Is graph technology the fuel that’s missing for data-based government?

Another government context for use of graphs is global smart city projects. For instance, in Turku, Finland, graph databases are being deployed to leverage IoT data to make better decisions about urban planning. According to Jussi Vira, CEO of Turku City Data, the IT services company helping the city of Turku realise its ideas: "A lack of clear ways to bridge the gap between data and business problems was inhibiting our ability to innovate and generate value from data." By deploying graphs, his team is able to represent many real-world business problems as people, objects, locations and events, and their interrelationships. Turku City Data found that graphs represent data in the same way business problems are described, so it was easier to match relevant datasets to concrete business problems. Adopting graph technology has enabled the city of Turku to deliver daily supplies to elderly citizens who cannot leave their homes because of the Covid-19 pandemic. The service determines routes through the city that optimise delivery speed and minimise transportation resources while maintaining unbroken temperature-controlled shipping requirements for foodstuffs and sensitive medication.
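Route-finding of the kind Turku's delivery service performs maps directly onto a weighted graph: nodes are depots and homes, edge weights are travel times. A minimal sketch using Dijkstra's algorithm — the graph data here is invented for illustration, not Turku's:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm: expand the cheapest known path first,
    so the first time we pop the goal, its cost is minimal."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight in graph.get(node, {}).items():
            if nxt not in visited:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []
```

Real deployments layer extra constraints on top (vehicle capacity, cold-chain windows), but the graph representation is what lets those constraints attach naturally to nodes and edges.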

The Relationship Between Software Architecture And Business Models (and more)

A software architecture has to implement the domain concepts in order to deliver the how of the business model. There are an unlimited number of ways to model a business domain, however. It is not a deterministic, sequential process. A large domain must be decomposed into software sub-systems. Where should the boundaries be? Which responsibilities should live in each sub-system? There are many choices to make, and the arbiter is the business model. A software architecture, therefore, is an opinionated model of the business domain which is biased towards maximising the business model. When software systems align poorly with the business domain, changes become harder and the business model is less successful. When developers have to mentally translate from business language to the words in code, it takes longer and mistakes are more likely. When new pieces of work always slice across multiple sub-systems, it takes longer to make changes and deploy them. It is, therefore, fundamentally important to align the architecture and the domain as well as possible.

In 2021, edge computing will hit an inflection point

Data center marketplaces will emerge as a new edge hosting option. When people talk about the location of "the edge," their descriptions vary widely. Regardless of your own definition, edge computing technology needs to sit as close to "the action" as possible. It may be a factory floor, a hospital room, or a North Sea oil rig. In some cases, it can be in a data center off premises but still as close to the action as makes sense. This rules out many of the big data centers run by cloud providers or co-location services that are close to major population centers. If your enterprise is highly distributed, those centers are too far. We see a promising new option emerging that unites smaller, more local data centers in a cooperative marketplace model. New data center aggregators such as Edgevana and Inflect allow you to think globally and act locally, expanding your geographic technology footprint. They don't necessarily replace public cloud, content delivery networks, or traditional co-location services — in fact, they will likely enhance these services. These marketplaces are nascent in 2020 but will become a viable model for edge computing in 2021.

Why Security Awareness Training Should Be Backed by Security by Design

The concepts of "safe by design" or "secure by design" are well-established psychological enablers of behavior. For example, regulators and technical architects across the automobile and airlines industries prioritize safety above all else. "This has to emanate across the entire ecosystem, from the seatbelts in vehicles, to traffic lights, to stringent exams for drivers," says Daniel Norman, senior solutions analyst for ISF and author of the report. "This ecosystem is designed in a way where an individual's ability to behave insecurely is reduced, and if an unsafe behavior is performed, then the impacts are minimized by robust controls." As he explains, these principles of security by design can translate to cybersecurity in a number of ways, including how applications, tools, policies, and procedures are all designed. The goal is to provide every employee role "with an easy, efficient route toward good behavior." This means sometimes changing the physical office environment or the digital user interface (UI) environment. For example, security by design to reduce phishing susceptibility might include implementing easy-to-use phishing reporting buttons within employee email clients. Similarly, it might mean creating colorful pop-ups in email platforms to remind users not to send confidential information.

Tech Should Enable Change, Not Drive It

Technology should remove friction and allow people to do their jobs, while enabling speed and agility. This means ensuring a culture of connectivity where there is trust, free-flowing ideation, and the ability to collaborate seamlessly. Technology can also remove interpersonal friction, by helping to build trust and transparency — for example, blockchain and analytics can help make corporate records more trustworthy, permitting easy access for regulators and auditors that may enhance trust inside and outside the organization. This is important; one study found that transparency from management is directly proportional to employee happiness. And happy employees are more productive employees. Technology should also save employees time, freeing them up to take advantage of opportunities for human engagement (or, in a pandemic scenario, enabling virtual engagement), as well as allowing people to focus on higher-value tasks. ... It’s vital that businesses recognize diversity and inclusion as a moral and a business imperative, and act on it. Diversity can boost creativity and innovation, improve brand reputation, increase employee morale and retention, and lead to greater innovation and financial performance.

Researchers bring deep learning to IoT devices

The customized nature of TinyNAS means it can generate compact neural networks with the best possible performance for a given microcontroller – with no unnecessary parameters. "Then we deliver the final, efficient model to the microcontroller," says Lin. To run that tiny neural network, a microcontroller also needs a lean inference engine. A typical inference engine carries some dead weight: instructions for tasks it may rarely run. The extra code poses no problem for a laptop or smartphone, but it could easily overwhelm a microcontroller. "It doesn't have off-chip memory, and it doesn't have a disk," says Han. "Everything put together is just one megabyte of flash, so we have to really carefully manage such a small resource." Cue TinyEngine. The researchers developed their inference engine in conjunction with TinyNAS. TinyEngine generates the essential code necessary to run TinyNAS' customized neural network. Any deadweight code is discarded, which cuts down on compile time. "We keep only what we need," says Han. "And since we designed the neural network, we know exactly what we need. That's the advantage of system-algorithm codesign."
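The "one megabyte of flash" constraint is easy to make concrete: the model's weights plus the engine and application code must all fit in the budget. A back-of-the-envelope check — the layer sizes, engine footprint, and 8-bit quantization below are illustrative assumptions, not TinyNAS output:

```python
FLASH_BUDGET = 1_000_000   # ~1 MB of flash, as in the MCU described above
ENGINE_AND_APP = 100_000   # assumed footprint of inference engine + app code

def conv_params(in_ch, out_ch, k=3):
    # k*k*in_ch weights per filter, plus one bias per output channel.
    return (k * k * in_ch + 1) * out_ch

def model_bytes(layers, bytes_per_param=1):
    # 1 byte per parameter assumes int8 quantization.
    return sum(conv_params(i, o) for i, o in layers) * bytes_per_param

# A toy 4-layer convolutional stack: (input channels, output channels).
layers = [(3, 16), (16, 32), (32, 64), (64, 128)]
fits = model_bytes(layers) + ENGINE_AND_APP <= FLASH_BUDGET
```

Arithmetic like this is why every stripped instruction matters: the same stack in 32-bit floats would quadruple the weight footprint, and an engine carrying dead code eats directly into what is left for the model.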

Quote for the day:

"Empowerment is the magic wand that turns a frog into a prince. Never estimate the power of the people, through true empowerment great leaders are born." -- Lama S. Bowen