Daily Tech Digest - October 21, 2020

6 tips for CIOs managing technical debt

Many applications are created to solve a specific business problem that exists in the here and now, without thought about how that problem will evolve or which adjacent problems it touches. For example, a development team might jump into creating a database to manage customer accounts without taking into consideration how that database is integrated with the sales/prospecting database. This can lead to thousands of staff-hours downstream spent transforming contacts and importing them from the sales database into the customer database. ... One of the best-known problems in large organizations is the disconnect between development and operations, where engineers design a product without first considering how their peers in operations will support it, resulting in support processes that are cumbersome, error-prone and inefficient. The entire discipline of DevOps exists in large part to resolve this problem by including representatives from the operations team on the development team -- but the dev/ops split exists outside programming as well. Infrastructure engineers may roll out routers, edge computers or SD-WAN devices without knowing how the devices will be patched or upgraded.

The Third Wave of Open Source Migration

The first and second open-source migration waves were periods of rapid expansion both for companies that rose up to provide commercial assurances for Linux and the open-source databases, like Red Hat, MongoDB, and Cloudera, and for platforms that made it easier to host open-source workloads in a reliable, consistent, and flexible manner via the cloud, like Amazon Web Services, Google Cloud, and Microsoft Azure. This trend will continue in the third wave of open-source migration, as organizations interested in reducing cost without sacrificing development speed look to migrate more of their applications to open source. They’ll need a new breed of vendor—akin to Red Hat or AWS—to provide the commercial assurances they need to do it safely. It’s been hard to be optimistic over the last few months. But as I look for a silver lining in the current crisis, I believe there is an enormous opportunity for organizations to get even more nimble in their use of open source. The last 20+ years of technology history have shown that open source is a powerful weapon organizations can use to navigate a global downturn.

It’s Time to Implement Fair and Ethical AI

Companies have gotten the message that artificial intelligence should be implemented in a manner that is fair and ethical. In fact, a recent study from Deloitte indicates that a majority of companies have actually slowed down their AI implementations to make sure these requirements are met. But the next step is the most difficult one: actually implementing AI in a fair and ethical way. A Deloitte study from late 2019 and early 2020 found that 95% of executives surveyed said they were concerned about ethical risk in AI adoption. While machine learning brings the possibility to improve the quantity and quality of data-driven decision-making, it also brings the potential for companies to damage their brand and erode the trust customers have placed in them if AI is implemented poorly. In fact, these risks were so palpable to executives that 56% said they have slowed down their AI adoptions, according to Deloitte’s study. While progress has been made in getting the message out about fair and ethical AI, there is still a lot of work to be done, says Beena Ammanath, the executive director of the Deloitte AI Institute. “The first step is well underway, raising awareness. Now I think most companies are aware of the risk associated” with AI deployments, Ammanath says.

C# designer Torgersen: Why the programming language is still so popular and where it's going next

Like all modern programming languages, C# continues to evolve. With C# 9.0 on course to arrive in November, the next update will focus on supporting "terse and immutable" (i.e. unchangeable) representation of data shapes. "C# 9.0 is trying to take some next steps for C# in making it easier to deal with data that comes over the wire, and to express the right semantics for data, if you will, that comes out of what we call an object-oriented paradigm originally," says Torgersen. C# 9.0 takes the next step in that direction with a feature called Records, says Torgersen. Records are a reference type that allows a whole object to be immutable and makes it act like a value. "We've found ourselves, for a long time now, borrowing ideas from functional programming to supplement the object-oriented programming in a way that really helps with, for instance, cloud-oriented programming, and helps with data manipulation," Torgersen explains. "Records is a key feature of C# 9.0 that will help with that." Beyond C# 9.0 is where things get more theoretical, though. Torgersen insists that there's no concrete 'endgame' for the programming language – or at least, not until it finally reaches some as-yet unknown expiration date.
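The article doesn't show Records in code. As a rough analogy (an assumption on my part, not C# syntax), Python's frozen dataclasses exhibit the same immutable, value-like behavior Torgersen describes: value-based equality and non-destructive "mutation" by copying.

```python
from dataclasses import dataclass, replace

# A frozen dataclass is roughly analogous to a C# 9.0 record:
# immutable fields, equality compared by value, and updates
# expressed as copies rather than in-place changes.
@dataclass(frozen=True)
class Person:
    first_name: str
    last_name: str

a = Person("Ada", "Lovelace")
b = Person("Ada", "Lovelace")
assert a == b  # compared by value, not identity

# replace() plays the role of C#'s `with` expression on a record:
c = replace(a, last_name="Byron")
assert c.last_name == "Byron"
assert a.last_name == "Lovelace"  # the original is unchanged
```

In C# 9.0 itself the copy step is written as a `with` expression on a `record` type.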

DOJ's antitrust fight with Google: how we got here

The DOJ said in its filing that this case is "just beginning." The government also says it's seeking to change Google's practices and that "nothing is off the table" when it comes to undoing the "harm" caused by more than a decade of anticompetitive business. Is it hard to compete with Google? The numbers speak for themselves. But that's because the company is darn good at what it does. Does Google use your data to help it improve search and advertising? Yes, it does. But this suit is not about privacy. It's about Google's lucrative advertising business. Just two years ago, the European Commission (EC) fined Google over €8 billion for various advertising violations. Though the DOJ is taking a similar tack, Google has done away with its most egregious requirements. These included exclusivity clauses, which stopped companies from placing competitors' search advertisements on their results pages, and Premium Placement, which reserved the most valuable page real estate for Google AdSense ads. It's also true that Google has gotten much more aggressive about using its own search pages to hawk its own preferred partners. As The Washington Post's Geoffrey A. Fowler recently pointed out: if you search for "T Shirts" on Google, the first real search result appears not on row one, two, or three — those are reserved for advertising — nor even in rows four through eight.

7 Hard-Earned Lessons Learned Migrating a Monolith to Microservices

It’s tempting to go from legacy straight to the bleeding edge. And it’s an understandable urge. You’re seeking to future-proof this time around so that you won’t face another refactor anytime soon. But I’d urge caution in this regard, and suggest taking an established route. Otherwise, you may find yourself wrangling two problems at once and getting caught in a fresh rabbit hole. Most companies can’t afford to pioneer new technology, and the ones that can tend to do it outside of any critical path for the business. ... For all its limitations, a monolithic architecture does have several intrinsic benefits. One of them is that it’s generally simple: you have a single pipeline and a single set of development tools. Venturing into a distributed architecture involves a lot of additional complexity, and there are lots of moving parts to consider, particularly if this is your first time doing it. You’ll need to compose a set of tools to make the developer experience palatable, possibly write some of your own (although I’d caution against this if you can avoid it), and factor in the discovery and learning process for all that as well.

What is confidential computing? How can you use it?

To deliver on the promise of confidential computing, customers need to take advantage of security technology offered by modern, high-performance CPUs, which is why Google Cloud’s Confidential VMs run on N2D series VMs powered by 2nd Gen AMD EPYC processors. To support these environments, we also had to update our own hypervisor and low-level platform stack while also working closely with the open source Linux community and modern operating system distributors to ensure that they can support the technology. Networking and storage drivers are also critical to the deployment of secure workloads, and we had to ensure we were capable of handling confidential computing traffic. ... With workforces dispersed, confidential computing can help organizations collaborate on sensitive workloads in the cloud across geographies and competitors, all while preserving the privacy of confidential datasets. This can lead to the development of transformative technologies – imagine, for example, being able to more quickly build vaccines and cure diseases as a result of this secure collaboration.

What A CIO Wants You to Know About IT Decision Making

CIOs know the organization needs new ideas, new products, new services, etc. as well as changes to current rules, regulations, and business processes to grow markets and stay ahead of competition. CIOs also know that the rules, regulations, and processes are the foundations of trust. Those things that seem to inhibit new ideas are the things that open customers’ minds to the next new thing an organization might offer. Without the trust established by following the rules, adhering to regulations, and at the far extreme, simply obeying the law, customers would not stick around to try the next new thing. For proof, look at the stock price of organizations that publicly announce IT hacks, data loss, or other trust-breaking events. Customers leave when trust is broken, and part of the CIO’s role is to maintain that trust. While CIOs know the standards that must be upheld, they also know how to navigate those standards to support new ideas and change requests. Supporting new ideas and adapting to change requires input from you, whether as a user, an employee, or another member of the IT department, beyond just submitting the IT change form or other automated process.

The Biggest Reason Not to Go All In on Kubernetes

Here’s the big thing that gets missed when a huge company open-sources its internal tooling – you’re most likely not at its scale. You don’t have the same resources, or the same problems, as that huge company. Sure, you are working your hardest to make your company so big that you have the same scaling problems as Google, but you’re probably not there yet. Don’t get me wrong: I love when large enterprises open-source some of their internal tooling, as it’s beneficial to the open-source community and it’s a great learning opportunity, but I have to remind myself that they are solving a fundamentally different problem than I am. While I’m not suggesting that you avoid planning ahead for scalability, getting something like Kubernetes set up and configured instead of developing your main business application can waste valuable time and funds. There is a considerable time and overhead investment in getting your operations team up to speed on Kubernetes that may not pay off. Google can afford to have its teams learning, deploying, and managing new technology. But especially for smaller organizations, premature scaling and premature optimization are legitimate concerns. You may be attracted to the scalability, and it’s exciting. But if you implement too early, you will only get the complexity without any of the benefit.

Did Domain Driven Design help me to ease out the approach towards Event Driven Architecture?

The most important aspect of Domain Driven Design is setting the context of a domain or sub-domain, where a domain is a very high-level segregation of different areas of the business and a sub-domain is a particular part of the domain representing a structure in which users use a specific ubiquitous language with the domain model. Without going into much detail of DDD, another paradigm one should be aware of is context mapping, which consists of identifying and classifying the relationships between bounded contexts within the domain. One or more contexts can be related to each other in terms of goals, reused components (code), or consumer and producer roles. ... The principles guiding the conglomeration of DDD and events help us shift the focus from the nouns (the domain objects) to the verbs (the events) in the domain. Focusing on the flow of events helps us understand how change propagates in the system — things like communication patterns, workflow, figuring out who is talking to whom, who is responsible for what data, and so on. Events represent facts about the domain and should be part of the Ubiquitous Language of the domain.
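The verbs-over-nouns idea can be sketched minimally: domain events are named as past-tense facts drawn from the ubiquitous language, and change propagates as subscribers react to them. All names here (`OrderPlaced`, the handler registry) are hypothetical, invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime

# A domain event for a hypothetical "Orders" bounded context.
# The name is a past-tense verb: it records a fact that has
# already happened in the domain.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    customer_id: str
    occurred_at: datetime

# Handlers subscribe by event type; this is how change
# propagates between contexts without direct coupling.
handlers = {}

def subscribe(event_type, handler):
    handlers.setdefault(event_type, []).append(handler)

def publish(event):
    for handler in handlers.get(type(event), []):
        handler(event)

fulfilment_queue = []
subscribe(OrderPlaced, lambda e: fulfilment_queue.append(e.order_id))

publish(OrderPlaced("o-1", "c-9", datetime.now()))
assert fulfilment_queue == ["o-1"]
```

The producing context only states the fact; which consumers care, and what they do with it, stays inside their own bounded contexts.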

Quote for the day:

“The only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle.” -- Steve Jobs

Daily Tech Digest - October 20, 2020

Five ways the pandemic has changed compliance—perhaps permanently

There is a strong acknowledgment that compliance will be forced to rely heavily on technology to ensure an adequate level of visibility into emerging issues. We need to strategically leverage technology and efficient systems to monitor risk. This is causing some speculation that a greater skills overlap will be required of CCO and CISO roles. This, however, also raises privacy concerns. Taylor believes the remote environment will lead to “exponential growth” in employee surveillance and that compliance officers will need to tread carefully given that this can undermine ethical culture: “Just because the tools exist, doesn’t mean you have to use them,” she says. Compliance veteran and advisor Keith Darcy predicts dynamic and continuous risk assessment—one that considers “the rapidly deteriorating and changing business conditions. ‘One-and-done’ assessments are completely inadequate.” Some predict that investigation interviews conducted on video conference and remote auditing will become the norm. Others are concerned that policies cannot be monitored or enforced without being in the office together; that compliance will be “out of sight, out of mind” to some degree. Communication must be a top priority for compliance, as the reduction of informal contacts with stakeholders and employees makes effectiveness more challenging.

In the Search of Code Quality

In general, functional and statically typed languages were less error-prone than dynamically typed, scripting, or procedural languages. Interestingly, defect types correlated more strongly with language than the number of defects did. In general, the results were not surprising, confirming what the majority of the community believed to be true. The study gained popularity and was extensively cited. There is one caveat: the results were statistical, and one must be careful when interpreting statistical results. Statistical significance does not always entail practical significance and, as the authors rightfully warn, correlation is not causation. The results of the study do not imply (although many readers have interpreted them in such a way) that if you switch from C to Haskell you will have fewer bugs in your code. In any case, the paper at least provided data-backed arguments. But that’s not the end of the story. As replication is one of the cornerstones of the scientific method, a team of researchers tried to replicate the study from 2016. The result, after correcting some methodological shortcomings found in the original paper, was published in 2019 in the paper On the Impact of Programming Languages on Code Quality: A Reproduction Study.

3 unexpected predictions for cloud computing next year

With more than 90 percent of enterprises using multicloud, there is a need for intercloud orchestration. The capability to bind resources together in a larger process that spans public cloud providers is vital. Invoking application and database APIs that span clouds in sequence can solve a specific business problem; for example, inventory reorder points based on a common process between two systems that exist in different clouds. Emerging technology has attempted to fill this gap, such as cloud management platforms and cloud service brokers. However, they have fallen short. They only provide resource management between cloud brands, typically not addressing the larger intercloud resource and process binding. This is a gap that innovative startups are moving to fill. Moreover, if the public cloud providers want to truly protect their market share, they may want to address this problem as well. Second: cloudops automation with prebuilt corrective behaviors. Self-healing is a feature where a tool can take automated corrective action to restore systems to operation. However, you have to build these behaviors yourself, including automations, or wait as the tool learns over time. We’ve all seen the growth of AIops, and the future is that these behaviors will come prebuilt with pre-existing knowledge that can operate distributed or centralized.

How Organizations Can Build Analytics Agility

Data and analytics leaders must frame investments in the current context and prioritize data investments wisely by taking a complete view of what is happening to the business across a number of functions. For example, customers bank very differently in a time of crisis, and this requires banks to change how they operate in order to accommodate them. The COVID-19 pandemic forced banks to take another look at the multiple channels their customers traverse — branches, mobile, online banking, ATMs — and how their comfort levels with each shifted. How customers bank, and what journeys they engage in at what times and in what sequence, are all highly relevant to helping them achieve their financial goals. The rapid collection and analysis of data from across channels, paired with key economic factors, provided context that allowed banks to better serve customers in the moment. New and different sources of information — be it transaction-level data, payment behaviors, or real-time credit bureau information — can help ensure that customer credit is protected and that fraudulent activity is kept at bay. Making the business case for data investments suddenly makes sense as business leaders live through data gap implications in real time.

Cisco targets WAN edge with new router family

The platform makes it possible to create a fully software-defined branch, including connectivity, edge compute, and storage. Compute and switching capabilities can be added via UCS-E Series blades and UADP-powered switch modules. Application hosting is supported using containers running on the Catalyst 8300’s multi-core, high-performance x86 processor, according to JL Valente, vice president of product management for Cisco’s Intent-Based Networking Group, in a blog about the new gear. Cisco said the Catalyst 8000V Edge Software is a virtual routing platform that can run on any x86 platform, or on Cisco’s Enterprise Network Compute System or appliance in a private or public cloud. Depending on what features customers need, the new family supports Cisco SD-WAN software, including Umbrella security software and Cisco Cloud On-Ramp, which lets customers tie distributed cloud applications from AWS, Microsoft and Google back to a branch office or private data center. The platforms produce telemetry that can be used in Cisco vAnalytics to provide insights into device and fabric performance as well as spot anomalies in the network and perform capacity planning.

2021 Will Be the Year of Catch-Up

With renewed focus on technology to bring about the changes needed, it’s crucial that organizations recognize that infrastructure must be secure. Our new office environment is anywhere we can find a connection to Wi-Fi, and that opens many more doors to cyber-attacks. The rapid shift in business operations significantly impacted the cyberthreat landscape – as companies fast-tracked the migration of digital assets to the cloud, they also inadvertently increased the attack surfaces from which hackers can try to gain access to their data and applications. C-suite executives are moving quickly with network plans to support exploding customer and supplier demand for contactless interactions and the unplanned need to connect a remote workforce, yet they are also aware that they are not fully prepared to adequately protect their organizations from unknown threats. The situation is further compounded by the cloud shared responsibility model, which says that cloud service providers are responsible for the security of the cloud while customers are responsible for securing the data they put into the cloud. Many organizations rely on their third-party providers to certify security management services, but the decentralized nature of this model can add complexity to how applications and computing resources are secured.

BA breach penalty sets new GDPR precedents

The reduction in the fine also adds fuel to the ongoing class action lawsuit against BA, said Long at Lewis Silkin. “Completely separate from the £20m fine by the ICO, British Airways customers, and indeed any staff impacted, are likely to be entitled to compensation for any loss they have suffered, any distress and inconvenience they have suffered, and indeed possibly any loss of control over their data they have suffered,” she said. “This might only be £500 a pop but if only 20,000 people claim that is another potential £10m hit, and if 100,000 then £50m. So whilst a win today, this is very much only round one for BA.” Darren Wray, co-founder and CTO of privacy specialist Guardum, said it was easy to imagine many of the breach’s actual victims would be put out by the ICO’s decision. “Many will feel their data and their fight to recover any financial losses resulting from the airline’s inability to keep their data safe has been somewhat marginalised,” he said. “This can only strengthen the case of the group pursuing a class action case against BA. The GDPR and the UK DPA 2018 do after all allow for such action and if the regulator isn’t seen as enforcing the rules strongly enough, it leaves those whose data was lost few alternative options,” said Wray.

Is Artificial Intelligence Closer to Common Sense?

COMET relies on surface patterns in its training data rather than understanding concepts. The key idea would be to supply surface patterns with more information outside of language, such as visual perceptions or embodied sensations. First-person representations, not language, would be the basis for common sense. Ellie Pavlick is attempting to teach intelligent agents common sense by having them interact with virtual reality. Pavlick notes that common sense would still exist even without the ability to talk to other people. Presumably, humans were using common sense to understand the world before they were communicating. The idea is to teach intelligent agents to interact with the world the way a child does. Instead of associating the idea of eating with a textual description, an intelligent agent would be told, “We are now going to eat,” and then it would see the associated actions, such as gathering food from the refrigerator and preparing the meal, and then see the meal being consumed. Concept and action would be associated with each other. The agent could then generate similar words when seeing similar actions. Nazneen Rajani is investigating whether language models can reason using basic physics. For example, if a ball is inside a jar, and the jar is tipped over, the ball will fall out.

Russia planned cyber-attack on Tokyo Olympics, says UK

The UK is the first government to confirm details of the breadth of a previously reported Russian attempt to disrupt the 2018 winter Olympics and Paralympics in Pyeongchang, South Korea. It declared with what it described as 95% confidence that the disruption of both the winter and summer Olympics was carried out remotely by the GRU unit 74455. In Pyeongchang, according to the UK, the GRU’s cyber-unit attempted to disguise itself as North Korean and Chinese hackers when it targeted the opening ceremony of the 2018 winter Games, crashing the website so spectators could not print out tickets and crashing the wifi in the stadium. The key targets also included broadcasters, a ski resort, Olympic officials, service providers and sponsors of the games in 2018, meaning the objects of the attacks were not just in Korea. The GRU also deployed data-deletion malware against the winter Games IT systems and targeted devices across South Korea using VPNFilter malware. The UK assumes that the reconnaissance work for the summer Olympics – including spearphishing to gather key account details, setting up fake websites and researching individual account security – was designed to mount the same form of disruption, making the Games a logistical nightmare for business, spectators and athletes.

What intelligent workload balancing means for RPA

“To be truly effective, a bot must be able to work across a wide set of parameters. Let’s say, for example, a rule requires a bot to complete work for goods returned that are less than $100 in value, but during peak times when returns are high, the rules may dynamically change the threshold to a higher number. The bot should still be able to perform all the necessary steps for that amount of approval without having to be reconfigured every time.” Gopal Ramasubramanian, senior director, intelligent automation & technology at Cognizant, added: “If there are 100,000 transactions that need to be performed, then instead of manually assigning transactions to different robots, the intelligent workload balancing feature of the RPA platform will automatically distribute the 100,000 transactions across different robots and ensure transactions are completed as soon as possible. “If a service level agreement (SLA) is tied to the completion of these transactions and the robots will not be able to meet the SLA, intelligent workload balancing can also commission additional robots on demand to distribute the workload and ensure any given task is completed on time.”
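The SLA scenario Ramasubramanian describes reduces to a simple sizing calculation. A sketch, with made-up throughput numbers (50 transactions per robot per minute, a four-hour SLA) standing in for whatever an RPA platform would measure:

```python
import math

# Hypothetical sizing helper: given a transaction backlog, per-robot
# throughput, and an SLA window, how many robots must be commissioned
# so the backlog finishes inside the SLA?
def robots_needed(transactions, per_robot_rate_per_min, sla_minutes):
    capacity_per_robot = per_robot_rate_per_min * sla_minutes
    return math.ceil(transactions / capacity_per_robot)

# 100,000 transactions, 50/minute per robot, 240-minute (4-hour) SLA:
# each robot clears 12,000 in the window, so 9 robots are needed.
print(robots_needed(100_000, 50, 240))  # -> 9
```

An intelligent workload balancer would rerun this kind of calculation continuously, scaling the robot pool up as the backlog or the deadline pressure changes.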

Quote for the day:

"You can build a throne with bayonets, but you can't sit on it for long." -- Boris Yeltsin

Daily Tech Digest - October 19, 2020

How To Build Out a Successful Multi-Cloud Strategy

While a multi-cloud approach can deliver serious value in terms of resiliency, flexibility and cost savings, making sure you’re choosing the right providers requires a comprehensive assessment. Luckily, all the main cloud vendors offer free trial services so you can establish which ones best fit your needs and see how they work with each other. It will pay to conduct proofs-of-concept using the free trials and run your data and code on each provider. You also need to make sure that you’re able to move your data and code around easily during the trials. It’s also important to remember that each cloud provider has different strengths—one company’s best option is not necessarily the best choice for you. For example, if your startup is heavily reliant on running artificial intelligence (AI) and machine learning (ML) applications, you might opt for Google Cloud’s AI open source platform. Or perhaps you require an international network of data centers, minimal latency and data privacy compliance for certain geographies for your globally used app. Here’s where AWS could step in. On the other hand, you might need your cloud applications to seamlessly integrate with the various Microsoft tools that you already use. This would make the case for Microsoft Azure.

How data and technology can strengthen company culture

Remote working exposed another potential weakness holding back teams from realising their potential – employee expertise that isn’t being shared. Under the lockdown, many companies realised that knowledge and experience within their workforce were highly concentrated within specific offices, regions, teams, or employees. How can these valuable insights be shared seamlessly across internal networks? A truly collaborative company culture must go beyond limited solutions, such as excessive video calls, which run the risk of burning people out. Collaboration tools that support culture have to be chosen based on their effectiveness at improving interactions, bridging gaps and simplifying knowledge sharing. ... While revamping strategies in recent months, many companies have started to prioritise customer retention and expansion over new customer acquisition, given the state of the economy. Data and technology can help employees adapt to this transition. Investing in tools that empower employees gives them the confidence, knowledge and skills they need to deliver maximum customer value. This in turn boosts customer satisfaction as staff deliver an engaging and consistent experience each time they connect.

Cloud environment complexity has surpassed human ability to manage

“The benefits of IT and business automation extend far beyond cost savings. Organizations need this capability – to drive revenue, stay connected with customers, and keep employees productive – or they face extinction,” said Bernd Greifeneder, CTO at Dynatrace. “Increased automation enables digital teams to take full advantage of the ever-growing volume and variety of observability data from their increasingly complex, multicloud, containerized environments. With the right observability platform, teams can turn this data into actionable answers, driving a cultural change across the organization and freeing up their scarce engineering resources to focus on what matters most – customers and the business.” ... 93% of CIOs said AI-assistance will be critical to IT’s ability to cope with increasing workloads and deliver maximum value to the business. CIOs expect automation in cloud and IT operations will reduce the amount of time spent ‘keeping the lights on’ by 38%, saving organizations $2 million per year, on average. Despite this advantage, just 19% of all repeatable operations processes for digital experience management and observability have been automated. “History has shown successful organizations use disruptive moments to their advantage,” added Greifeneder.

A New Risk Vector: The Enterprise of Things

The ultimate goal should be the implementation of a process for formal review of cybersecurity risk and readout to the governance, risk, and compliance (GRC) and audit committee. Each of these steps must be undertaken on an ongoing basis, instead of being viewed as a point-in-time exercise. Today's cybersecurity landscape, with new technologies and evolving adversary tradecraft, demands a continuous review of risk by boards, as well as the constant re-evaluation of the security budget allocation against rising risk areas, to ensure that every dollar spent on cybersecurity directly buys down those areas of greatest risk. We are beginning to see some positive trends in this direction. Nearly every large public company board of directors today has made cyber-risk an element either of the audit committee, risk committee, or safety and security committee. The CISO is also getting visibility at the board level, in many cases presenting at least once if not multiple times a year. Meanwhile, shareholders are beginning to ask the tough questions during annual meetings about what cybersecurity measures are being implemented. In today's landscape, each of these conversations about cyber-risk at the board level must include a discussion about the Enterprise of Things, given the materiality of risk.

FreedomFI: Do-it-yourself open-source 5G networking

FreedomFi offers a couple of options to get started with open-source private cellular through their website. All proceeds will be reinvested towards building up the Magma project's open-source software. Sponsors contributing $300 towards the project will receive a beta FreedomFi gateway and limited, free access to the Citizens Broadband Radio Service (CBRS) shared spectrum in the 3.5 GHz "innovation band." Despite the name "good-buddy," CBRS has nothing to do with the CB radio service used by amateurs and truckers for two-way voice communications. CB lives on in the United States in the 27MHz band. Those contributing $1,000 will get support with a "network up" guarantee, offering FreedomFi guidance over a series of Zoom sessions. The company guarantees they won't give up until you get a connection. FreedomFi will be demonstrating an end-to-end private cellular network deployment during their upcoming keynote at the Open Infrastructure Summit and publishing step-by-step instructions on the company blog. This isn't just a hopeful idea being launched on a wing and a prayer. WiConnect Wireless is already working with it. "We operate hundreds of towers, providing fixed wireless access in rural areas of Wisconsin," said Dave Bagett, WiConnect's president.

Why We Must Unshackle AI From the Boundaries of Human Knowledge

Artificial intelligence (AI) has made astonishing progress in the last decade. AI can now drive cars, diagnose diseases from medical images, recommend movies and even dates, make investment decisions, and create art that people have sold at auction. A lot of research today, however, focuses on teaching AI to do things the way we do them. For example, computer vision and natural language processing – two of the hottest research areas in the field – deal with building AI models that can see like humans and use language like humans. But instead of teaching computers to imitate human thought, the time has come to let them evolve on their own, so that instead of becoming like us, they have a chance to become better than us. Supervised learning has thus far been the most common approach to machine learning: algorithms learn from datasets containing pairs of samples and labels. For example, consider a dataset of enquiries (not conversions) for an insurance website, with information about each person's age, occupation, city, income, and so on, plus a label indicating whether the person eventually purchased the insurance.
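To make the supervised-learning setup concrete, here is a minimal sketch in plain Python. The enquiry records and the one-rule "learner" (which picks the income threshold that best separates purchasers from non-purchasers) are entirely hypothetical, invented for illustration; real systems would use far richer features and models.

```python
# Hypothetical enquiry records: (age, income) feature pairs with a
# purchased/not-purchased label, as described above.
data = [
    ((25, 30000), 0),
    ((40, 52000), 1),
    ((35, 48000), 1),
    ((22, 21000), 0),
    ((51, 60000), 1),
    ((29, 28000), 0),
]

def fit_income_threshold(samples):
    """Supervised learning in miniature: choose the income cut-off that
    classifies the labelled (sample, label) pairs most accurately."""
    best_threshold, best_correct = None, -1
    for (_, income), _ in samples:
        correct = sum(
            1
            for (_, inc), label in samples
            if (inc >= income) == bool(label)
        )
        if correct > best_correct:
            best_threshold, best_correct = income, correct
    return best_threshold

threshold = fit_income_threshold(data)

def predict(income):
    """Apply the learned rule to a new, unlabelled enquiry."""
    return 1 if income >= threshold else 0
```

The point of the sketch is the shape of the problem, not the model: the algorithm only ever sees sample–label pairs, and "learning" means choosing the rule that best reproduces the labels.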

The Race for Intelligent AI

The key takeaway is that BERT and GPT-3 have fundamentally the same structure in terms of information flow. Although attention layers in transformers can distribute information in a way that a normal neural network layer cannot, they still retain the fundamental property of passing information forward from input to output. The first problem with feed-forward neural networks is that they are inefficient. When processing information, the processing chain can often be broken down into small repetitive tasks. Addition, for example, is a cyclical process in which single-digit adders – or, in a binary system, full adders – are used together to compute the final result. In a linear information system, adding three numbers requires three adders chained together. This is inefficient, especially for neural networks, which would have to learn each adder unit separately when it is possible to learn one unit and reuse it. It is also not how backpropagation tends to learn: rather than learning one reusable unit, the network tries to build a hierarchical decomposition of the process, which in this case would not 'scale' to more digits. Another issue with using feed-forward neural networks to simulate "human-level intelligence" is thinking. Thinking is an optimization process.
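The reuse argument above can be sketched in a few lines of Python: one full-adder unit, defined once, is applied repeatedly in a ripple-carry loop, whereas a purely feed-forward pipeline would need a separate adder stage per digit. This is an ordinary program, of course, not a neural network; it only illustrates the kind of unit reuse the text says feed-forward nets cannot express.

```python
def full_adder(a, b, carry):
    """One single-bit full adder: returns (sum_bit, carry_out)."""
    total = a + b + carry
    return total % 2, total // 2

def add_bits(x_bits, y_bits):
    """Ripple-carry addition over bit lists (least-significant bit first).
    The same full_adder unit is reused for every position, instead of a
    separate unit being 'learned' per digit."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    out.append(carry)  # final carry becomes the highest-order bit
    return out
```

For example, `add_bits([1, 1, 0], [1, 0, 1])` adds 3 and 5 (both LSB-first) and yields 8 as `[0, 0, 0, 1]`. Extending to more digits costs nothing but more loop iterations, which is exactly the "scaling" a fixed-depth feed-forward chain lacks.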

Why Agile Transformations sometimes fail

The attitude top management takes toward Agile adoption impacts the whole process, and disengaged management tops my ranking of reasons transformations fail. There have not been many examples in my career of a successful bottom-up Agile transformation. Usually, top management is at some point aware of the Agile activities in the company, such as Scrum adoption, but leaves them to the teams. A frequent reason for this behavior is that top management does not understand that agility benefits the business, the product, and – most importantly – customers and users. They consider Agile, Scrum, and Lean to be things that might improve delivery and teams' productivity. Imagine a situation in which a large number of Scrum Masters report the same impediment; it becomes an organizational impediment. How would you resolve it when decision-makers are not interested? What would be the impact on team engagement and product delivery? Active management that fosters agility and empiricism, and actively participates in the whole process, is the secret ingredient that makes the transition more realistic. Another observation I have made is a strong focus on delivery, productivity, technology and effectiveness.

The encryption war is on again, and this time government has a new strategy

So what's going on here? With two new signatories -- Japan and India -- the statement suggests that more governments are getting worried, but the tone is slightly different now. Perhaps governments are trying a less direct approach this time, hoping to put pressure on tech companies in a different way. "I find it interesting that the rhetoric has softened slightly," says Professor Alan Woodward of the University of Surrey. "They are no longer saying 'do something or else'". What this note tries to do is put the ball firmly back in the tech companies' court, Woodward says, by implying that big tech is putting people at risk by not acceding to their demands -- a potentially effective tactic in building a public consensus against the tech companies. "It seems extraordinary that we're having this discussion yet again, but I think that the politicians feel they are gathering a head of steam with which to put pressure on the big tech companies," he says. Even if police and intelligence agencies can't always get encrypted messages from tech companies, they certainly aren't without other powers. The UK recently passed legislation giving law enforcement wide-ranging powers to hack into computer systems in search of data.

Code Security: SAST, Shift-Left, DevSecOps and Beyond

One of the most important elements of DevSecOps is a project's branching strategy. In addition to the main branch, every developer uses their own separate branch: they develop their code and then merge it back into main. A key requirement for the main branch is to maintain zero unresolved warnings so that it passes all functional testing. Therefore, before developers on an individual branch can submit their work, it must pass all functional tests and all static analysis checks. A pull request or merge request with unresolved warnings -- whether functional test failures or static analysis warnings -- is rejected and must be fixed and resubmitted. Functional test failures must be fixed, but their root cause may be hard to find: a functional test error might say, "Input A should generate output B," yet C comes out instead, with no indication of which piece of code to change. Static analysis, on the other hand, will reveal exactly where there is a memory leak and will provide a detailed explanation for each warning. This is one way in which static analysis helps DevSecOps deliver the best and most secure code. Finally, let's review Lean and shift-left, and see how they are connected.
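The zero-unresolved-warnings merge gate described above can be sketched as a small Python function. This is a hypothetical illustration of the policy's logic, not the API of any real CI system: the function names and message formats are invented, and a real pipeline would wire this behavior into branch-protection rules rather than application code.

```python
def merge_gate(functional_failures, static_warnings):
    """Hypothetical pre-merge check: a branch may merge into main only
    when every functional test passes AND static analysis reports zero
    warnings. Returns a (decision, problems) pair."""
    problems = []
    # Functional test failures: must be fixed, though the failing test
    # alone may not say which piece of code is at fault.
    for name in functional_failures:
        problems.append(f"functional test failed: {name}")
    # Static analysis warnings: point at a specific location and come
    # with an explanation, so they are usually faster to resolve.
    for warning in static_warnings:
        problems.append(f"static analysis warning: {warning}")
    if problems:
        return "reject", problems
    return "merge", problems
```

A clean branch merges; any unresolved finding of either kind rejects the request until it is fixed and resubmitted, which is exactly the rule that keeps the main branch at zero warnings.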

Quote for the day:

"The mediocre leader tells; the good leader explains; the superior leader demonstrates; and the great leader inspires." -- Buchholz and Roth