Daily Tech Digest - June 18, 2021

Does Cloud Computing Help or Harm the Environment?

Fortunately, getting data centers to rely on clean, renewable energy sources and use that energy more efficiently are far easier tasks than reducing the carbon footprint of the billions of digital storage devices that they've replaced. Here is where economic and environmental interests may overlap. Data center companies have every incentive to maximize the efficiency of their resources and reduce their cost. For that reason alone, the world's biggest data center companies—Amazon, Microsoft, and Google—have all begun implementing plans for their data centers to run on 100% carbon-free electricity. Amazon claims to be the world's largest renewable energy purchaser, consistent with its goals of powering the company with 100% renewables by 2025 and becoming carbon net-zero by 2040. Microsoft has pledged to be carbon negative by 2030 and to remove from the atmosphere all the carbon the company has ever emitted since it was founded in 1975. To achieve this, it plans on having all of its data centers running on 100% renewable energy by 2025. And Google had already reached its 100% renewable energy target in 2018, though it did so in part by purchasing offsets to match those parts of its operations that still relied on fossil fuel electricity.


5 Keys to Creating a Zero-Trust Security Foundation

Recent high-profile attacks have disrupted commerce across the world, bringing home the critical importance of maintaining a robust IT security program. The recent ransomware attacks on Colonial Pipeline, the largest petroleum pipeline in the US, and meat supplier JBS highlight the cascading, society-disrupting havoc these types of attacks can create. Those concerns increasingly extend to IoT devices, as evidenced by the recent hack of cloud-based security services firm Verkada, where bad actors gained access to 150,000 of the company’s cameras, including inside factories, hospitals, prisons, schools, and even police stations. Vulnerabilities come in many forms, and we have known for a long time that the onslaught of IoT devices onto corporate networks is largely unprotected. It’s little wonder, then, that when the Ponemon Institute surveyed 4,000 security professionals and asked why breaches still happen, the top answer was the increasing attack surface. ... As a networking vendor, connecting people and things is part of Aruba’s core mission. 


4 ways AI can help us enter a new age of cybersecurity

Combine conventional threat intelligence (a list of all known cyberthreats to date) with machine learning to understand risks. This should result in a better, more efficient system of threat detection and prevention. It can also help to identify any loophole or threat present in the data. In fact, machine learning can be used to spot an abnormality or potential vulnerability in the midst of “normal” activity and warn users of a threat before it can compromise essential data. With the right systems in place, hackers won't even realize that you know of their presence, so you can take immediate measures to ensure the safety of your digital infrastructure. ... In recent years, cryptocurrencies like Bitcoin and Ethereum have been rising in popularity. These cryptocurrencies are built upon blockchain, an innovative technical solution for storing a secure, decentralized record of transactions. Blockchain can be used to secure medical records and to help in security management by identifying criminal identity loopholes in the system. With blockchain technology, verification keys wouldn't be required anymore: if someone tries to hack the data, the system analyzes the whole mass of data chains. 
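
To make the anomaly-spotting idea concrete, here is a minimal sketch of ML-based detection, assuming scikit-learn and synthetic "normal activity" data; a production system would train on features extracted from real logs or network telemetry.

```python
# Minimal anomaly-detection sketch; data and thresholds are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_activity = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # baseline behavior
suspicious_event = np.array([[8.0, 8.0, 8.0, 8.0]])               # clear outlier

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(suspicious_event))  # [-1] -> flag for review before data is compromised
```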


Moving From Digital Banking To Embedded Experiences

First and most importantly, banks and credit unions must focus on placing the consumer at the center of the organization, with product silos eliminated in favor of teams aligned around the customer journey. According to the research, 64% of the banking sector’s digital masters have “created personae and journey maps to identify and serve customers better.” Beyond that, it will be imperative to create agility and flexibility in delivery similar to what exists in fintech and bigtech firms. This will most likely require changes in the composition of boards, top leadership and departmental management, bringing in people who can see banking from a new perspective. New operating models will also be required, including collaboration with third-party providers. There also needs to be support for open banking APIs that enable the offering of new products both within and outside financial services. Bottom line, the infrastructure of banking, as well as the perspective of banking’s role in the consumer’s life, must change. According to Capgemini, 64% of banks are actively working with a wide ecosystem of partners – such as startups, incubators, technology firms, and even competitors – to co-develop solutions. 


Cybersecurity: Five key reminders for compliance teams

Cybersecurity breaches are not always the work of nefarious actors orchestrating a sophisticated hack. Damaging data breaches may be just as likely to result from unintentional human error. Even seemingly benign behaviors –– using public Wi-Fi, neglecting to put passwords on computers and mobile devices, and clicking on bad links –– can be all it takes to give cybercriminals the access they need. It does little good to build a digital fortress if there aren’t adequate controls over who gains access, and under what circumstances. ... First, establishing clear segregation of duties (SoD) helps avoid conflicts that could lead to fraud or other abuse. For large organizations with multiple lines of business, this is particularly important. Investment professionals on a firm’s buy-side, for example, should not have access to the exact same data as those on the sell-side. SoD may also help prevent control failures that can occur when too many people have access to data for which they aren’t necessarily accountable. By segregating duties (and data access), compliance teams are better positioned to spot weaknesses, while also ensuring that teams and individuals understand exactly what data should be in their purview and what may be off-limits.
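
As a concrete illustration of the point about segregated access, here is a minimal sketch of SoD enforced in code, with invented role and dataset names: each role carries an explicit data scope, so buy-side and sell-side access never overlaps by accident.

```python
# Minimal segregation-of-duties sketch; roles and datasets are invented.
ROLE_DATA_SCOPES = {
    "buy_side_analyst":  {"research_notes", "portfolio_positions"},
    "sell_side_analyst": {"research_notes", "client_order_flow"},
}

def can_access(role: str, dataset: str) -> bool:
    """Grant access only to datasets explicitly in the role's scope."""
    return dataset in ROLE_DATA_SCOPES.get(role, set())

assert can_access("buy_side_analyst", "portfolio_positions")
assert not can_access("buy_side_analyst", "client_order_flow")  # SoD enforced
```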


The four Fs of employee experience

Ask yourself what it would take for employee experience to be a delight — for example, through gamified training modules or KPIs. We work with a leading technology firm that asked itself this very question and developed its tools for surveying employees accordingly, designing them to be simple and intuitive, satisfying, and not frustrating. The firm used layman’s terms and an appealing tone of voice in written content such as instructions, explanations, and requests. It avoided jargon. And it invested in interesting, stimulating visual interactions rather than ones that were bland and text-heavy — the new experience was less like a spreadsheet assignment to be endured and more like an opportunity to engage. ... Don’t neglect the foundations. Ultimately, employees have a right to expect that “it just works,” whether “it” is their human resources self-service portal, their expense management system, or their system interoperability. It’s also critical that user experience be accessible to all, including employees with any type of disability. 


Amex bets on AI and NLP for customer service

We started this journey [of leveraging AI] long before we applied machine learning to some other more mature use cases, including our fraud models and some credit risk models. And in recent years, especially the past five years or so, we started to see with certainty that deep neural network models outperform almost every other machine learning model when it comes to high-dimensional data and highly unstructured data. We not only deal with some of the traditional fields, like customer transactions, but also there are tax consequences and volume history data. Neural network models can effectively deal with all of that. ... First, I think it’s really about recognizing patterns. If you look at certain use cases where you have customer behavior that’s being repeated and you can expedite that behavior, then that tends to be a real sweet spot for machine learning capabilities. The other thing I would add is that we take the decision to apply machine learning techniques quite seriously. We have an entire AI governance board that cross-checks all the models we build for bias and privacy concerns. So even taking the approach of AI, we have to justify to a number of internal teams why it makes sense.


‘Debt’ as a Guide on the Agile Journey: Technical Debt

If network infrastructure is not your specialty, you might question how much requirements for connectivity could really change over 10 years. Does the Network Team really need to develop a completely new solution and live the DevOps dream? The answer to that is a resounding yes! Today’s (not to mention tomorrow’s) requirements for security features and performance are significantly different from 10 years ago; the network infrastructure is key in the cyber security area of protecting vital business processes and applications by controlling data traffic, and the network must support the vastly increasing amount of data traffic that results from new streaming and IoT services, for instance. The Network Team was not able to deliver on these expectations with the legacy technology that we were fighting to operate and maintain, and thus, the business was impacted. Internally, the Network Team themselves were also impacted. They felt the heat from several CXOs who were frustrated that they couldn't satisfactorily support top priorities such as the cyber security agenda.


Deep reinforcement learning will transform manufacturing as we know it

For many large systems, the only possible way to find the best action path is with simulation. In those situations, you must create a digital model of the physical system you want to understand in order to generate the data reinforcement learning needs. These models are called, alternately, digital twins, simulations and reinforcement-learning environments. They all essentially mean the same thing in manufacturing and supply chain applications. Recreating any physical system requires domain experts who understand how the system works. This can be a problem for systems as small as a single fulfillment center for the simple reason that the people who built those systems may have left or died, and their successors have learned how to operate but not reconstruct them. Many simulation software tools offer low-code interfaces that enable domain experts to create digital models of those physical systems. This is important, because domain expertise and software engineering skills often cannot be found in the same person.
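
To show what such a reinforcement-learning environment looks like in practice, here is a minimal sketch of a toy fulfillment-center model with invented inventory dynamics; a real digital twin would encode the domain experts' knowledge described above.

```python
# Toy "digital twin" exposed with a reset/step interface, as RL libraries expect.
import random

class FulfillmentCenterEnv:
    """Toy simulation: the agent chooses how much stock to reorder each step."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.stock = capacity // 2

    def reset(self) -> int:
        self.stock = self.capacity // 2
        return self.stock  # observation

    def step(self, reorder_qty: int):
        demand = random.randint(0, 20)                      # stochastic demand
        sold = min(self.stock, demand)
        self.stock = min(self.capacity, self.stock - sold + reorder_qty)
        reward = sold - 0.1 * self.stock                    # revenue minus holding cost
        return self.stock, reward, False, {}                # obs, reward, done, info

env = FulfillmentCenterEnv()
obs = env.reset()
for _ in range(5):
    obs, reward, done, info = env.step(reorder_qty=10)      # an RL agent would pick this
```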


Multicluster Management with Kubernetes and Istio

Do you have multiple Kubernetes clusters and a service mesh? Do your virtual machines and services in a Kubernetes cluster need to interact? This article will take you through the process and considerations of building a hybrid cloud using Kubernetes and an Istio Service Mesh. Together, Kubernetes and Istio can be used to bring hybrid workloads into a mesh and achieve interoperability across multiple clusters. But another layer of infrastructure — a management plane — is helpful for managing multicluster or multimesh deployments. ... Using Kubernetes enables rapid deployment of a distributed environment that enables cloud interoperability and unifies the control plane on the cloud. It also provides resource objects, such as Service, Ingress and Gateway, to handle application traffic. The Kubernetes API Server communicates with the kube-proxy component on each node in the cluster, which creates iptables rules for the node and forwards requests to other pods. Assuming a client wants to access a service in Kubernetes, the request is first sent to the Ingress/Gateway and then forwarded to the backend service.
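
For a sense of what working across clusters involves, here is a minimal sketch that uses the official Python Kubernetes client to enumerate Services in several clusters; the context names are assumptions to be replaced with entries from your own kubeconfig.

```python
# Minimal multicluster inspection sketch using the official kubernetes client.
from kubernetes import client, config

for context in ["cluster-east", "cluster-west"]:  # hypothetical kubeconfig contexts
    api = client.CoreV1Api(
        api_client=config.new_client_from_config(context=context)
    )
    # List every Service the mesh might need to stitch together.
    for svc in api.list_service_for_all_namespaces().items:
        print(context, svc.metadata.namespace, svc.metadata.name)
```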



Quote for the day:

"A good leader can't get too far ahead of his followers." -- Franklin D. Roosevelt

Daily Tech Digest - June 17, 2021

A Deep Dive Into Efinity: Next-Generation Blockchain for NFTs

Efinity will be a hub for all fungible and non-fungible tokens, meant to serve and benefit all participants in the digital asset space—collectors, creators, artists, decentralized app (dApp) developers, enterprises, sports teams, and more. The Enjin ecosystem is robust, with a wide range of projects and developers using our products to create, distribute, and integrate NFTs with their projects. Over 1.14 billion digital assets have already been created with Enjin. All of these tokens can benefit from the cost efficiency, speed, and next-generation features of Efinity—and that’s only the existing Enjin ecosystem. We believe Efinity will do for the wider NFT ecosystem what ERC-1155 did for Ethereum: make NFTs even more accessible to everyone. We expect end-users to create NFTs with the same ease and as intuitively as they take a picture with a smartphone today; trade NFTs faster than they can purchase something from Amazon; and most importantly, use those tokens in a myriad of futuristic ways. It’s up to companies and developers across the world to give that next-gen utility to NFTs, and truly unlock their power to the masses.


A Look at a Zero Trust Strategy for the Remote Workforce

If you are new to the security world, it is fair to ask yourself, “Isn’t access to data and systems always conditional? Isn’t it always granted to someone who has access to the credentials (ID and password)?” True enough, but in totality, the approach to managing access encompasses a broader spectrum of privacy policies. These policies include a mix of different strategies that can be applied based on an organization’s security vulnerabilities. Conditional access is one such security management practice that many companies have opted for. The shift to smart mobile devices and cloud has made it necessary to ensure conditional access. Further, this has become imperative, as remote working is here to stay. With several companies making announcements about permanent work-from-home policies, a zero-trust model of conditional access has become crucial. IT security teams must be prepared to both validate and verify devices and users with a set of automated policies. IT teams could easily monitor incoming IP addresses as the first step for identifying credentials. However, the growing use of VPNs coupled with remote working is making that impossible, thus rendering organizations more vulnerable to threats.


Most firms face second ransomware attack after paying off first

The majority of businesses that choose to pay to regain access to their encrypted systems experience a subsequent ransomware attack. And almost half of those that pay up say some or all of the data they retrieved was corrupted. Some 80% of organisations that paid ransom demands experienced a second attack, of which 46% believed the subsequent ransomware to be caused by the same hackers. Amongst those that paid to regain access to their systems, 46% said at least some of their data was corrupted, according to a Cybereason survey released Wednesday. Conducted by Censuswide, the study polled 1,263 security professionals in seven markets worldwide, including 100 in Singapore, as well as respondents in Germany, France, the US, and UK. Globally, 51% retrieved their encrypted systems without any data loss, while 3% said they did not regain access to any encrypted data. The report revealed that one particular organisation reportedly paid a ransom in the millions of dollars, only to be targeted for a second attack by the same attackers within a fortnight.


Top 10 Security Risks in Web Applications

Injection, or SQL injection, is a type of security attack in which a malicious attacker inserts or injects a query via input data (as simple as filling in a form on the website) from the client side to the server. If it is successful, the attacker can read data from the database, add new data, update data, delete data present in the database, issue administrator commands to carry out privileged database tasks, or even issue commands to the operating system in some cases. ... Broken authentication is a case where the authentication system of the web application is broken and can result in a series of security threats. This is possible if the application permits users to use weak passwords that are either dictionary words or common passwords like “12345678” or “password”, allowing an adversary to carry out a brute-force attack and disguise itself as a user. This is so common because, shockingly, 59% of people use the same passwords on all the websites they use. Moreover, 90% of passwords can be cracked in close to 6 hours! Therefore, it is important to require users to set strong passwords with a combination of alphanumeric and special characters. Broken authentication is also possible due to credential stuffing, URL rewriting, or failure to rotate session IDs.
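
A minimal sketch of the injection risk and its standard mitigation, using Python's built-in sqlite3 module and an invented users table: the first query concatenates user input into SQL, while the second passes it as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# VULNERABLE: input concatenated into the query; the OR clause matches every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print(len(rows))  # 1 -> the injected predicate returned the whole table

# SAFE: a parameterized query treats the input as a literal value.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))  # 0 -> no user is literally named "' OR '1'='1"
```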


A Google AI Designed a Computer Chip as Well as a Human Engineer—But Much Faster

Human designers thought “there was no way that this is going to be high quality. They almost didn’t want to evaluate them,” said Goldie. But the team pushed the project from theory to practice. In January, Google integrated some AI-designed elements into their next-generation AI processors. While specifics are being kept under wraps, the solutions were intriguing enough for millions of copies to be physically manufactured. The team plans to release its code for the broader community to further optimize—and understand—the machine’s brain for chip design. What seems like magic today could provide insights into even better floorplan designs, extending the gradually-slowing (or dying) Moore’s Law to further bolster our computational hardware. Even tiny improvements in speed or power consumption in computing could make a massive difference. “We can…expect the semiconductor industry to redouble its interest in replicating the authors’ work, and to pursue a host of similar applications throughout the chip-design process,” said Kahng.


Jensen Huang On Metaverse, Proof Of Stake And Ethereum

For a long time now, proof of stake has been baffling people interested in crypto and its application on various platforms like Twitter and Project Bluesky. Jensen’s views on the matter have also been favourable to the concept, which might shortly replace proof of work in blockchain. He said that the demand for Ethereum had reached such a level that it would be nice to have another method of confirming transactions. “Ethereum has established itself. It now has an opportunity to implement a second generation that carries on that platform approach and all of the services that are built on top of it,” he added. Jensen also explained that the reason behind the development of Nvidia’s CMP was the expectation that a lot of Ethereum coins will be mined. CMP has enough functionality that it can be used for crypto mining. ... Addressing the question of how long the chip shortage will last, Jensen said that demand has been growing consistently, and Nvidia in particular has had pent-up demand since it reset and reinvented computer graphics, a driving factor in skyrocketing demand. 


Prioritizing and Microservices

Microservices frequently need to communicate with one another in order to accomplish their tasks. One obvious way for them to do so is via direct, synchronous calls using HTTP or gRPC. However, using such calls introduces dependencies between the two services involved, and reduces the availability of the calling service (because when the destination service is unavailable, the calling service typically becomes unavailable as well). This relationship is captured by the CAP theorem (and PACELC), which I've described previously. ... If any response is necessary, the processing service publishes an event, which the initiating service can subscribe to and consume. ... The issue with this approach is that the prioritization is only applied at the entrance to the system, and is not enforced within it. This is exacerbated by the fact that the report orchestrator has no FIFO expectation and in fact can begin work on an arbitrary number of commands at the same time, potentially resulting in a very large amount of work in process (WIP). We can use Little's Law to understand how WIP impacts the time it takes for requests to move through a system, which can impact high-priority SLAs. Constraining total WIP on the system, or at least on the orchestrator, would mitigate the issue, as in the sketch below.
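
Here is a minimal sketch of one way to constrain WIP, using an asyncio semaphore as a stand-in for the orchestrator's admission control; by Little's Law (L = λW), capping WIP (L) at a given throughput (λ) also caps the average time a request spends in the system (W).

```python
# Bounded-WIP sketch: at most MAX_WIP commands are ever in flight.
import asyncio

MAX_WIP = 4
wip_limit = asyncio.Semaphore(MAX_WIP)

async def handle_command(command_id: int) -> None:
    async with wip_limit:          # blocks once MAX_WIP commands are in process
        await asyncio.sleep(0.1)   # stand-in for real report generation
        print(f"finished command {command_id}")

async def main() -> None:
    # Twenty commands arrive at once, but WIP never exceeds four.
    await asyncio.gather(*(handle_command(i) for i in range(20)))

asyncio.run(main())
```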


Cloud Outage Fallout: Should You Brace for Future Disruption?

The outage also put other topics in focus that might not have received consistent attention in the past. Though DevOps is frequently talked about in enterprise development circles, Bates questions to what degree it is being implemented. “If we can truly get to a DevOps world, securing development and operations, it’s going to help a lot,” he says. “We talk very glibly about DevOps, but we don’t ask the really hard questions about if anyone is really doing this.” Taken in the context of sudden moves to the cloud in response to the pandemic, the Fastly outage was a relatively quick blip, says Drew Firment, senior vice president of transformation with cloud training platform A Cloud Guru. The incident does offer a moment of reflection for organizations. “Folks are looking at their cloud architecture,” he says. “Architecture equals operations.” As organizations build in the cloud, decisions on cloud providers and services can have a dramatic effect on resiliency, Firment says. “That’s why cloud architects are in such demand, especially if they can take those things into consideration.”


Proactive and reactive: symbiotic sides of the same AI coin

Artificial Intelligence (AI) as a phrase is bandied about to refer to any number of technologies currently in use. And it’s not that this is wrong per se, but it’s like referring to rustic Italian cuisine and molecular gastronomy simply as “food”. The world would be a poorer place without either, but they serve entirely separate purposes for the palate. According to Gartner, “By 2025, proactive (outbound) customer engagement interactions will outnumber reactive (inbound) customer engagement interactions.” The distinction being made here is between AI as designed for use in the reactive realm (think chatbots) and AI used for proactive engagement. While the core technology that underlies both may be similar, and both have specific use cases, proactive engagement is a more focused utilisation. If you have ever attempted to play the game ‘Twenty Questions’, you have had an inkling of what a chatbot is attempting to do, i.e., asking a series of questions of an individual in an effort to get at an answer. Except in the case of chatbots, you are usually playing the game with an irate customer in a negative frame of mind. 


Are your cryptographic keys truly safe? Root of Trust redefined for the cloud era

When you are working with cloud infrastructure, the hardware (and in many cases also the software) is not under your control. This is also true of cloud-based HSMs provided by cloud service providers (CSPs). You need to look no further than the CLOUD Act to realize that your CSPs have immediate access to your keys and data. This is not theoretical access – this report published by Amazon details the law enforcement data requests with which Amazon complied over a six month period in 2020. It’s not a big jump to imagine an insider at your CSP exploiting this ability to expose your keys. While CSPs make genuine efforts to secure their hardware under the Shared Responsibility Model, the nature of the beast is that using third-party infrastructure also leaves you vulnerable to supply chain attacks. Consider the attack on SolarWinds and imagine the repercussions of your CSP – and by extension you – falling victim to such a large-scale supply chain attack. It’s clear that the implementation of Root of Trust as a purely hardware solution deployed in a single location needs to move with the times.



Quote for the day:

"No person can be a great leader unless he takes genuine joy in the successes of those under him. -- W. A. Nance

Daily Tech Digest - June 16, 2021

Cloud investments slow to deliver ‘substantial’ benefit for many companies

“After a decade of cloud experience, organizations are facing a talent shortage for all cloud-related skills,” Forrester said in a March 2020 report. “Although legacy skill sets translate well to new cloud technologies, the cultural leap to evaluate, select, and operate for productivity, system-level efficiency, and workload-specific problem solving is proving to be a challenge. Enterprise attempts to hire and train talent are constantly plagued with poaching by the cloud vendors themselves.” Other barriers stand in the way of successful cloud technology implementations. According to PwC, trust-related considerations like a cloud’s impact on customer commitments or regulatory compliance are considered either too late or not at all. Only 17% of risk management leaders responding to the firm’s survey said they’re involved at the start of cloud projects. And 55% of chief human resource officers see changes to processes and ways of working as significant issues when it comes to the cloud. ... It comes as no surprise that members of the C-suite are more involved than before in cloud adoption efforts, given the amount of capital at stake.


Why a Serverless Data API Might be Your Next Database

App development stacks have been improving so rapidly and effectively that today there are a number of easy, straightforward paths to push code to production, on the cloud platform of your choice. But what use are applications without the data that users interact with? Persistent data is such an indispensable piece of the IT puzzle that it’s perhaps the reason the other pieces even exist. Enter cloud and internet-scale requirements, essentially mandating that back-end services must be independently scalable, modular subsystems to succeed. Traditionally, this requirement has been difficult in the extreme for stateful systems. No doubt, database-as-a-service (DBaaS) has made provisioning, operations, and security easier. But as anyone who has tried to run databases on Kubernetes will tell you: auto-scaling databases, especially ones that are easy for developers to use, remain out of reach for mere mortals. Serverless data offers enticing benefits, but market offerings are limited. What serverless data can, should, or could do for us isn’t always well understood. And yet, database simplicity for application developers today is increasingly taking a new form: autoscaling database services that are delivered as fluent, industry-standard APIs -- APIs that are secure, well documented, easy to use, and always on.
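
The shape of such a data API can be sketched as follows; the endpoint, auth header, and query syntax here are hypothetical placeholders rather than any particular vendor's contract.

```python
# Hypothetical HTTP data-API call; substitute your provider's real endpoint and auth.
import requests

BASE_URL = "https://db.example.com/api/rest/v1"  # placeholder endpoint

resp = requests.get(
    f"{BASE_URL}/keyspaces/shop/orders",
    headers={"Authorization": "Bearer <token>"},     # token elided
    params={"where": '{"status": {"$eq": "open"}}'},
    timeout=10,
)
resp.raise_for_status()
for row in resp.json().get("data", []):
    print(row)
```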


Making Your Life Easier with C# Scripting

If you've installed Visual Studio, then you already have a perfectly good CS-Script environment: CSI.EXE (I found my copy in C:\Users\<user name>\Source\ExchangeControl.WebService\bin\roslyn). You can create a CS-Script command environment by just opening the Developer Command Prompt for Visual Studio and typing CSI. Once the window has re-displayed the command prompt, you can start entering and executing CS-Script. You're not limited to single C# statements at the CSI prompt: statements that you enter in the CSI environment build on previous statements you've entered. ... Even more useful is the CS-Script REPL (Read-Evaluate-Print-Loop) window that you can open by going to Visual Studio's View | Other Windows menu choice and selecting C# Interactive. In that window you can now enter CS-Script code and just hit the <Enter> key to have it execute. There are a couple of disappointments here, though. It might make sense to try out classes and their members from the interactive window. However, just because the window has opened in Visual Studio while your solution is open, it doesn't mean the window knows anything about the classes defined in the current solution. In fact, the window's default folder isn't even your current solution's folder.


New threat & vulnerability management APIs - create reports, automate, integrate

Customized reports and dashboards enable you to pool the most meaningful data and insights about your organization’s security posture into a more focused view based on what your organization or specific teams and stakeholders need to know and care about most. Custom reports can increase the actionability of information and improve efficiencies across teams, because it reduces the workload of busy security teams and allows them to focus on the most critical vulnerabilities. Before building custom views using tools such as PowerBI and Excel, you can enrich the native datasets provided by Microsoft’s threat and vulnerability management solution with additional data from Microsoft Defender for Endpoint or a third-party tool of your choice. In addition, these reports/dashboards give you an easy way to report key information and trends to top management to track business KPIs and provide meaningful insights on the overall status of the vulnerability management program in your organization. With a custom interface you can show the information that your teams need and nothing more, creating a simpler task view or list of day-to-day work items. 
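
A minimal sketch of pulling vulnerability data for such a custom report, assuming an Azure AD app registration with the appropriate permissions; the token flow is standard OAuth2 client credentials, but treat the vulnerability endpoint path as an assumption and verify it against the current Microsoft documentation.

```python
# Sketch: export vulnerability data from the Defender for Endpoint API.
import requests

TENANT_ID, APP_ID, APP_SECRET = "<tenant>", "<app>", "<secret>"  # placeholders

# Standard OAuth2 client-credentials token request.
token = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "client_id": APP_ID,
        "client_secret": APP_SECRET,
        "scope": "https://api.securitycenter.microsoft.com/.default",
        "grant_type": "client_credentials",
    },
).json()["access_token"]

# Endpoint path as documented circa 2021 -- verify before relying on it.
vulns = requests.get(
    "https://api.securitycenter.microsoft.com/api/vulnerabilities",
    headers={"Authorization": f"Bearer {token}"},
).json()["value"]
print(f"{len(vulns)} vulnerabilities exported for the report")
```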


Combining classical and quantum computing opens door to new discoveries

The research team from IQC, in partnership with the University of Innsbruck, is the first to propose the measurement-based approach in a feedback loop with a regular computer, inventing a new way to tackle hard computing problems. Their method is resource-efficient and can therefore use small quantum states, because they are custom-tailored to specific types of problems. Hybrid computing, where a regular computer's processor and a quantum co-processor are paired into a feedback loop, gives researchers a more robust and flexible approach than trying to use a quantum computer alone. While researchers are currently building hybrid computers based on quantum gates, Muschik's research team was interested in the quantum computations that could be done without gates. They designed an algorithm in which a hybrid quantum-classical computation is carried out by performing a sequence of measurements on an entangled quantum state. The team's theoretical research is good news for quantum software developers and experimentalists because it provides a new way of thinking about optimization algorithms. 
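
The feedback-loop pattern itself can be sketched classically: a conventional optimizer proposes parameters, the quantum co-processor (simulated below by a toy single-qubit cost function) measures a cost, and the loop repeats until it converges. This is only an illustration of the hybrid loop, not the team's measurement-based algorithm.

```python
# Hybrid feedback-loop sketch: classical optimizer + (simulated) quantum measurement.
import numpy as np
from scipy.optimize import minimize

def measured_cost(theta: np.ndarray) -> float:
    # Toy stand-in: expectation value <Z> of one qubit rotated by theta[0].
    # A real co-processor would estimate this from repeated measurements.
    return float(np.cos(theta[0]))

result = minimize(measured_cost, x0=np.array([0.1]), method="COBYLA")
print(result.x, result.fun)  # converges near theta = pi, where the cost is -1
```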


Critical Factors for Managing Applications and Kubernetes at the Edge

As 5G wireless finds ubiquity, and as more connected devices on the Internet of Things (IoT) begin using wireless communications, data volumes and data rates are also increasing. While these two factors are somewhat independent, together they increase the demand for applications on the edge by orders of magnitude. This demand for speed means that the old model of a central database slowly reacting to application queries from a variety of sources is now being replaced with both applications and data located at the network edge, where they can respond quickly to a vast flow of inputs. Containerized, microservice applications that support this flow must be located where they can handle it, which means that they, too, must be at the edge. Kubernetes is the industry’s tool of choice for container orchestration; however, when moving containers to the edge, additional Kubernetes management complications appear. Deployment, security, and fleet management processes all become exponentially more complex given that the number of clusters to be managed is now measured in the hundreds.


Out in IT: A work-in-progress for the LGBTQ+ community

It’s hard to know exactly how underrepresented the community is, because to date the industry hasn’t focused on tracking LGBTQ+ employment, and companies are only now starting to offer self-identification opportunities to get greater transparency into the makeup of their employee base. While identifying gender and race is a common part of the onboarding process at most companies, sexual orientation and gender identity are not, which makes it all the more difficult to gather metrics. Because the decision to share that status is voluntary, LGBTQ+ employees have a different experience than other “visible” minorities, says Jeff Raver (he/him), a top-level IT executive, who openly identifies as gay. “Gay people must choose to share who they are and this affects people in multiple ways,” says Raver, vice president of strategy, growth, and innovation at SAIC. “Since teams may not know they are working with an LGBTQ co-worker, both unintentional and intentional bias becomes a greater challenge. Additionally, the stress of not sharing your authentic self requires substantial energy and can become a huge distraction for LGBTQ persons that choose to remain in the closet.”


SAFe is a marketing framework, not an Agile scaling framework

SAFe tells a story that resonates with the existing worldview of numerous corporations. The SAFe narrative fits snugly with the existing command & control paradigm of many large companies. You can keep on doing what you did before, shuffle some teams, we’ll throw in some fancy new labels and POOF! now you’re Agile. Doesn’t that sound amazing? No hard work necessary, yet you’ll still have the same magnificent results? That’s the promise and appeal of SAFe. The SAFe acronym was carefully picked to seem risk-free. SAFe sells the illusion that you can radically change while staying in your comfort zone. As nice as it sounds, deep down we all know it isn’t true. Radical change is never easy. Agile is a new paradigm that requires you to fundamentally change how you work. Most corporations are not up for those kinds of drastic changes, and that’s perfectly understandable. For them SAFe offers an alluring but ultimately inconsequential alternative. SAFe offers what corporations are familiar with and are able to recognize. That’s exactly why it’s bound to fail. Working in a new paradigm should feel uncomfortable and uncertain until the moment it doesn’t.


How can banks harness automation to its fullest value?

Automation technologies could contribute an additional US$1 trillion annually in value across the global banking sector – through increased sales, cost reduction and new or unrealized opportunities. But this value is still largely being left on the table. Why? There are well-documented challenges with automation, including lack of clear strategic intent and senior executive support for automation, plus heavily siloed deployment within organisations, resulting in disconnects within and across digital transformation efforts. To be frank, operating models per se neither enable nor call for strategic use of automation technologies. But a hidden key reason has become increasingly obvious – the failure to grasp the nature and size of the opportunity. If automation technologies can be recombined in new ways, not only can existing opportunities be seized, but new ones can be created, ad infinitum. The prescient banking executives we are researching understand two things: the strategic opportunities offered by intelligent automation; and how automation can drive the twin engines of compound growth and combinatorial innovation. 


3 Ways CIOs Can Enable Innovation Within a Hybrid Workforce

Plan everything intentionally. Design floats luck to the shore; otherwise luck just lies in the offshore mud and waves feebly at the beach. The best virtual gatherings of 2020 were the planned ones -- the games of charades, the holiday meals where people shared recipes and made them independently, the weddings where guests attended, however briefly, when they never could have in person. Fewer meetings can make for better innovation if parts of the process are aligned to the best ways to achieve them. (Workers tell us they spend 8+ hours a week in meetings, on average; help them cut that time by making the meeting time they do spend the most productive it can be.) Make plans that bring people together for the tasks that demand shared presence but also encourage them to share endeavors they might not have been able to foresee. CIO to-do: Show executives the menu of meeting types that they can choose from and model the best behavior in selection by demanding that any meeting have a reason to happen when it does and a reason to include everyone invited. Require participants to set for themselves a role they’ll play in the meeting.



Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer

Daily Tech Digest - June 15, 2021

How the public sector can accelerate digital discovery

Firstly, public bodies should focus on outcomes rather than output. By identifying where an immediate impact can be made to address the challenges of legacy technology – rather than trying to fix everything at once – you can empower digital partners and discovery teams to identify issues and make key decisions without blockers from other teams, existing structures or business areas. Removing this red tape will mean decisions and actions are taken at pace, delivering greater value and results in the process, rather than creating complicated services that users struggle to navigate. The next focus to enhance digital discovery should be diversity: building and working with project teams that cover a wide range of disciplines and skill sets, as well as ages, races and genders. Increased diversity means that a discovery team benefits from different experiences and frames of reference, helping to avoid conformity and a groupthink mentality, which can result in issues being missed or solutions not being considered because everyone is thinking on the same page. For example, including people from non-digital backgrounds in a discovery team, such as service users, will help to identify problems that otherwise may be missed. 


Delivering the Highest Quality of Experience in a Multi-Cloud World

The global pandemic has accelerated enterprise IT teams’ desire to simplify the management of complex multi-cloud and edge environments and operate them holistically as a single WAN. It is also driving IT requirements for delivering the highest levels of application performance for all their cloud-hosted business applications, from any network in the emerging post-pandemic environment. This shift is intensifying the urgency to transform conventional data center and MPLS-centric and VPN-based networks to a more modern hybrid SD-WAN environment that combines MPLS and internet with secure managed internet-based cloud services. In a hybrid WAN environment, application performance across a WAN can vary considerably from site to site or region to region because of underlying factors such as latency, packet loss and jitter that must be taken into consideration, especially using a mix of MPLS and broadband connectivity services. The Aruba EdgeConnect SD-WAN edge platform, acquired with Silver Peak, supports advanced visibility, routing, control and intent-based policy management for any application – thereby improving the performance and availability of business applications by dynamically routing traffic to virtually any site, automatically adapting to real-time network conditions.


Intelligence gathering: Bringing AI technology into strategic planning

While theories governing corporate strategy have been debated (and sometimes overthrown) over the years, real time strategy focuses on modernizing an aspect that has practically been left untouched: methodology. AI techniques, which include machine learning, can import data from an abundance of sources, identify patterns and trends, and supply insights for decision-makers. In the process, AI-enabled planning upends traditional processes that depend on (and are affected by) human bias. Too often, the authors point out, current strategic decisions are based on information that is flawed across multiple dimensions (e.g., completeness, accuracy) and end up being unduly influenced by intuition and experience. During the exhaustive process of devising a plan, many assumptions and hypotheses are undeservedly promoted to “facts,” especially if they help dim uncertainty. The result: strategic plans that gain consensus, but emerge with a blandness akin to vision statements—and no mechanism for consistent follow-up. Without alignment among business units as to how each defines success, even companies that have embraced AI can end up stalled on the AI maturity curve, unable to progress beyond early victories in cost reductions and productivity gains.


Unique TTPs link Hades ransomware to new threat group

Researchers claim to have discovered the identity of the operators of Hades ransomware, exposing the distinctive tactics, techniques, and procedures (TTPs) they employ in their attacks. Hades ransomware first appeared in December 2020 following attacks on a number of organizations, but to date there has been limited information regarding the perpetrators. ... The findings are a result of incident response engagements carried out by Secureworks in the first quarter of 2021. “Some third-party reporting attributes Hades to the Hafnium threat group, but CTU research does not support that attribution,” the researchers wrote. “Other reporting attributes Hades to the financially motivated Gold Drake threat group based on similarities to that group’s WastedLocker ransomware. Despite use of similar application programming interface (API) calls, the CryptOne crypter, and some of the same commands, CTU researchers attribute Hades and WastedLocker to two distinct groups as of this publication.” ... “Typically, when we see a variety of playbooks used around a particular ransomware, it points to the ransomware being delivered as ransomware-as-a-service (RaaS) with different pockets of threat actors using their own methods,” Marcelle Lee, senior security researcher, CTU-CIC at Secureworks, tells CSO.


AI: It’s Not Just For the Big FAANG Dogs Anymore

“Previously, building models, building features, was extremely difficult” and typically required a data scientist. “But today, particularly for SMEs, this type of automation tool can help those aspects a lot… Our type of automation is definitely helping them to ramp up the speed of their AI journey.” Fujimaki noted how one of dotData’s smaller customers was able to build AI solutions without a huge investment. The company, Sticky.io, develops a subscription management service that is provided to other businesses as a SaaS offering. It wanted to add a predictive capability to identify payments that were likely to fail. “For them, the biggest barrier was… skill,” Fujimaki tells Datanami. “They are a cloud-native company, so the data is stored in AWS. On the AI side, they didn’t have data scientists, so they needed AutoML functionality.” Sticky.io’s product manager was able to use dotData to comb through their data and identify the right features that would go into the predictive model. Even though he didn’t possess preexisting talents in data science, the pilot was a success, and Sticky.io’s leadership recognized the value that it brought. “The most important skill that [customers] have to have is the input side and output side,” Fujimaki says. 


Application modernization patterns with Apache Kafka, Debezium, and Kubernetes

The very first question is where to start the migration. Here, we can use domain-driven design to help us identify aggregates and the bounded contexts where each represents a potential unit of decomposition and a potential boundary for microservices. Or, we can use the event storming technique created by Alberto Brandolini to gain a shared understanding of the domain model. Other important considerations here would be how these models interact with the database and what work is required for database decomposition. Once we have a list of these factors, the next step is to identify the relationships and dependencies between the bounded contexts to get an idea of the relative difficulty of the extraction. Armed with this information, we can proceed with the next question: Do we want to start with the service that has the least amount of dependencies, for an easy win, or should we start with the most difficult part of the system? A good compromise is to pick a service that is representative of many others and can help us build a good technology foundation. That foundation can then serve as a base for estimating and migrating other modules. 


Understanding AIOps – separating fact from fiction

AIOps is much more than another buzzword or a simple tool to correlate incidents. When implemented properly, AIOps can detect anomalies automatically and help remediate and prevent incidents before they impact end users and customers. Once anomalies or incidents are detected, it takes a further step and provides structured analysis and detail on what these issues are and what the root cause is. This allows the IT team to understand the problem within minutes and fix it faster, preserving user experience and avoiding disruptions to the business. This is observability in action. When working with telemetry data, AIOps can pick the right team to alert of issues it detects early, and provide actionable insights so that operations become more efficient and DevOps teams can focus on innovation, rather than spending non-productive time reactively troubleshooting problems. ... There is no real reason why smaller teams could not use AIOps to differentiate their business, correct operational issues and decrease human burden. In fact, for small teams, AIOps can help to quickly discover issues and decrease pressure on already busy teams who need to eliminate toil to focus on value creation.


Data Scientists Will be Extinct in 10 Years

Data scientists will be extinct in 10 years (give or take), or at least the role title will be. Going forward, the skill set collectively known as data science will be borne by a new generation of data savvy business specialists and subject matter experts who are able to imbue analysis with their deep domain knowledge, irrespective of whether they can code or not. Their titles will reflect their expertise rather than the means by which they demonstrate it, be it compliance specialists, product managers or investment analysts. We don’t need to look back far to find historic precedents. During the advent of the spreadsheet, data entry specialists were highly coveted, but nowadays, as Cole Nussbaumer Knaflic (the author of “Storytelling With Data”) aptly observes, proficiency with Microsoft Office suite is a bare minimum. Before that, the ability to touch type with a typewriter was considered a specialist skill, however with the accessibility of personal computing it has also become assumed. Lastly, for those considering a career in data science or commencing their studies, it may serve you well to constantly refer back to the Venn diagram that you will undoubtedly come across.


Cisco bolts together enterprise and industrial edge with new routers

As organizations accelerate digitization, they need a way to simplify management and security across the network and edge devices. "With our new routing portfolio, customers can have a united architecture across their diverse edge use cases from HQ to remote edges. The same rich functionality and robust security models across your entire business – from campuses and branch offices to substations, remote operating locations, fleets, on-the-go connected assets," wrote Butaney. "Now utilities can securely connect their edge to reduce outages, integrate renewables, and improve grid resiliency. Transportation systems providers can optimize routes for first responders and share real-time location and safety information with travelers. Whatever your business, you can connect all your networks with one common architecture and holistic enterprise-wide security approach," Butaney stated. The modular routers, including the Catalyst IR8100, IR8100 Heavy Duty Series and IR8300 Rugged Series Router, can be customized, and all can have storage or CPUs upgraded in the field.


Quick and Seamless Release Management for Java projects with JReleaser

The original mission of JReleaser is to streamline the release and publishing process of Java binaries in such a way that these binaries may be consumed by platform-specific package managers, that is, provide a ZIP, TAR, or JAR file. JReleaser will figure out the rest, letting you publish binaries to Homebrew, Scoop, Snapcraft, Chocolatey, and more. In other words, JReleaser shortens the distance between your binaries and your consumers by meeting them where they prefer to manage their packages and binaries. Early in JReleaser’s design, it became apparent that splitting the work into several steps that could be invoked, individually or as one single unit, would be a better approach than what GoReleaser offers today. This design choice allows developers to micromanage every step as needed to hook-in JReleaser at a specific point of their release process without having to rewrite everything. For example, you can fire up JReleaser and have it create a Git release (GitHub, GitLab, or Gitea) along with an automatically formatted changelog, or you can tell JReleaser to append assets to an existing Git release that was created by other means, or perhaps you’re only interested in packaging and publishing a package to Homebrew regardless of how the Git release was created.



Quote for the day:

"Whenever you see a successful business, someone once made a courageous decision." -- Peter F. Drucker

Daily Tech Digest - June 14, 2021

How and Why Enterprises Must Tackle Ethical AI

Explainability can also help humans who must work with algorithmic findings that don't seem to make sense at first glance. For instance, Cloudera Fast Forward Labs has a prototype available that predicts churn for telecom providers looking to identify which customers are at risk of dropping the service. The machine learning model found that one of the most important factors in whether someone would leave is whether they have a high degree of complaints about the service. But it's not the complainers who are at risk of leaving. Actually, the opposite is true. The complainers are the ones who are planning to stay for one reason or another, so they have a higher stake in the quality of the service being good. That's why they complain. They care about the service improving. The ones without the high stake just leave when they are dissatisfied. That's important to know if you are a service representative who is empowered to offer incentives to customers at risk of churn. Creating explainability is among several important steps enterprises must embed in their artificial intelligence operations in order to make responsible, ethical AI a part of doing business. A key to making it work is to ensure that these steps are part of the overall AI process.
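
As a small illustration of the explainability step, here is a sketch using scikit-learn's permutation importance on synthetic churn-style data; note that importance scores alone don't reveal the direction of an effect, which is exactly why the human interpretation described above matters.

```python
# Feature-importance sketch on invented churn-style data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))            # columns: [complaints, tenure, usage]
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # churn driven by the first two features

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(["complaints", "tenure", "usage"], result.importances_mean):
    print(f"{name}: {score:.3f}")        # "usage" should score near zero
```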


The Engineer’s Guide to Writing Meaningful Code Comments

In most cases, you aren’t the only person working on the same project or codebase. That means that other people get to read your code and have to understand it. That’s also true for the code comments you leave behind. Developers often write ‘quick and dirty’ comments without much context, leaving other developers clueless about what you’re trying to say. It’s a bad practice that creates more confusion than clarity. So, yes - you should be bothered with writing meaningful code comments to help other developers. A code comment that describes the function, the reasoning behind the function, and its input and output will speed up the learning process of other developers. Especially for junior developers, this information comes in handy when learning the code. On the other hand, code comments lead us to the discussion of whether we should write them at all. There’s a significant group of developers that advocates against writing code comments, the reason being that code should be self-explanatory. If another developer can’t understand the purpose of your code by looking at it, it’s bad code. 
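
Here is a short example of the kind of comment the paragraph argues for, with an invented business rule: it records the purpose, the reasoning, and the inputs and outputs, rather than restating what the code literally does.

```python
def apply_loyalty_discount(total: float, years_as_customer: int) -> float:
    """Return the order total after the loyalty discount.

    Why: retention improves when long-standing customers see a visible
    discount, so we grant 2% per year of tenure, capped at 10% to protect
    margins (invented rule for illustration).

    Args:
        total: pre-discount order total in the account currency.
        years_as_customer: whole years since the account was created.
    """
    discount = min(0.02 * years_as_customer, 0.10)
    return total * (1 - discount)
```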


Why quantum computers are a risk to your business today, and what you can do about it

We can’t be sure which quantum-safe algorithms NIST will standardise and because these algorithms are still relatively new, you may not want to completely do away with today’s standards. After all, quantum computers are still too primitive to break current encryption standards, so using today’s methods is still an effective way to protect against current info security threats. Therefore, as we make the transition to quantum-safe security, it’s important to practice ‘crypto-agility’. Crypto-agility is the process of understanding what existing cryptographic measures can be migrated over to quantum-ready solutions. ... This crypto-agile approach will offer greater assurance against both traditional attacks and future threats. This is vital as many devices, systems and applications that rely on encryption for security are now looking to be deployed and are expected to have a lifespan of over 10 years – if these aren’t cryptographically agile enough to deal with a future quantum attack, organisations will leave themselves vulnerable in the future.
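
One common way to build that agility is an indirection layer: callers request a signing capability rather than a concrete algorithm, so a quantum-safe scheme can later be swapped in with a single configuration change. A minimal sketch, with placeholder algorithm names and hash-based stand-ins for real signature libraries:

```python
# Crypto-agility sketch: algorithms are registered under stable capability names.
import hashlib
from typing import Callable, Dict

SIGNERS: Dict[str, Callable[[bytes], bytes]] = {}

def register(name: str):
    def wrap(fn: Callable[[bytes], bytes]) -> Callable[[bytes], bytes]:
        SIGNERS[name] = fn
        return fn
    return wrap

@register("classical-rsa")                  # today's default
def sign_classical(msg: bytes) -> bytes:
    return hashlib.sha256(b"rsa:" + msg).digest()   # stand-in for an RSA library call

@register("pq-candidate")                   # future NIST-standardized scheme
def sign_quantum_safe(msg: bytes) -> bytes:
    return hashlib.sha256(b"pqc:" + msg).digest()   # stand-in for a PQC library call

ACTIVE_SIGNER = "classical-rsa"             # one config change migrates every caller

def sign(msg: bytes) -> bytes:
    return SIGNERS[ACTIVE_SIGNER](msg)

print(sign(b"hello").hex())
```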


Inclusion Has to Be Continuous

A leader’s job is to create an environment where people can be challenged, and also show that they can be wrong. What my manager did afterwards on my first job was he went to his peers and to our whole team and said, "You know, you basically showed that I'm an idiot, I should have asked." He showed vulnerability; humans can make mistakes. By revealing this, he created safety. What I try to do now too, is now that I have the title and the authority, I really encourage my team to question me, because that's the only way a good idea becomes a great idea. I get them maybe too comfortable in challenging me. But that way I can create the safety we need. We need to have these open dialogues and conversations, showing that when someone asks a question, it's respected. It's not thought of as something “stupid”, it's about asking questions and showing immediate action to help. Literally, I had a new bathroom within a week; that demonstrated this was no longer lip service. It's about creating safety for people to speak up and take immediate action, whether you can or can't do it, giving them an immediate response to why. And then it becomes safe to ask for help. When leaders ask for help, it shows we all need help.


The Future of Work is in for a (Gigantic) Change

With the advancement of work-from-home culture and the future migration to four-weekday and hybrid working models, there will also need to be a rethinking of large corporate offices. On one hand, people do not want to be in a crowded office space and would prefer odd-even type models; on the other hand, it is not easy to set up a work-from-home office if you do not have additional bedrooms, dedicated space, childcare options, and a large home overall, and not everyone has that. Also, most companies need to encourage their own employees to interact more for enhanced output, peer learning, and competition. This could open many corporate offices up as co-working spaces accessible to everyone looking for innovation, learning, and collaboration. It is highly likely that the co-working industry will be back with a big bang, as it makes possible the dream of the office next door or a walk-to-office culture. In fact, co-working companies may be the new commercial real estate aggregators, as many new collaborations may be underway between corporate offices and co-working companies to put the abundant existing commercial infrastructure to good use.


Fighting Half-Blind Against Ransomware Won't Work

To tackle the ransomware information-sharing gap, the cybersecurity industry should establish the RIRN, as called for in the Ransomware Task Force report. The RIRN would serve several functions, including the receipt and sharing of incident reports, directing organizations to incident response services, aggregating data, and sharing alerts about ongoing threats. The RIRN should develop standard reporting formats based on existing standards to make automated sharing possible, and it should adopt business processes that avoid double-counting data, protect privacy, and focus on the value proposition to participants. This network should include nonprofits, cybersecurity vendors, insurance providers, incident responders, and government agencies. A functioning RIRN would help close the information gap that inhibits our response to ransomware. We should build such a network based on the lessons learned from past information sharing initiatives, thereby avoiding the usual flaws that undermine such efforts. The cybersecurity industry shouldn't wait for the government to take the lead. We can create the network now and invite governments to join something that already exists.


IoT cloud services: How they stack up against DIY

Beyond their own services, the strategy adopted by the cloud vendors is to build a rich ecosystem of partnerships, marketplaces, development platforms, and APIs, so that they can offer as much flexibility and as many pathways as possible—as long as the data that requires higher-level processing eventually ends up in their cloud, says Dilip Sarangan, senior director of research at Frost & Sullivan. Neil Shah, vice-president at Counterpoint Research, says that the major cloud players are offering fully managed, end-to-end IoT deployments for “maximum value capture.” But they are also covering their bases by offering open interfaces and partnering with other players in response to enterprise concerns about vendor lock-in. This “have it your way” approach makes sense when you consider the vastly different types of IoT scenarios and the different types of data generated by connected cars, smart cities, smart homes, manufacturing, verticals like oil and gas or healthcare, video surveillance, etc. Dubrova adds that the one thing the cloud vendors lack is domain expertise in specific verticals. “Cloud vendor analytics toolsets tend to be very horizontal and limited—that is where partnerships are playing a key differentiation role.”


Cybersecurity Beyond The Enterprise: The Top Tips Everyone Should Know

With an increasingly remote workforce, many companies are allowing employees to use their personal phones or laptops for business purposes, and people are using their work devices for personal tasks, too. These practices became more common during the pandemic, and they open the door for cybercriminals to steal both personal and corporate data at the same time. Keeping your devices separate is just good practice: if one device gets infected, you have a backup, and you haven’t jeopardized both your personal data and your company’s security. The next tip is worth saying three times, because the majority of people aren’t listening: using strong passwords that are complex and unique to each account is the No. 1 way to prevent cyberattacks. A Google-Harris Poll found that 24% of Americans admit to using the word "password" or "123456" to secure their accounts, and a whopping 66% say they reuse the same password across multiple accounts. The problem is so widespread that Google recently announced it will enable two-factor authentication by default, automatically pushing users to take extra security steps.
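
As a concrete illustration of that advice, here is a minimal sketch, using only Python's standard library, of generating a strong password unique to each account; the length and alphabet are arbitrary choices for the example. In practice a password manager does this for you and remembers the result.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a cryptographically strong random password."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account, never reused anywhere else.
for account in ("email", "banking", "work-vpn"):
    print(account, "->", generate_password())
```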


Global Trends That Will Affect Digital Lending And Payments In India

To the online transactions ecosystem, blockchain has been lauded as a technology that will revolutionise the space, maximising efficiency with features that include transparency, traceability and enhanced accessibility. Blockchain will be able to provide a high level of security for the exchange of money and sensitive information, allowing users to draw on its transparency while lowering operational costs and creating an environment for safe real-time transfers. India is the biggest market for remittances, with over $62 Bn sent to India from abroad in 2016. Yet, under the Foreign Exchange Management Act of 1999 (FEMA), only an authorized person or entity may deal in foreign exchange. With the incorporation of blockchain and smart contracts, however, international remittance will prove to be a promising use case for the technology in the Indian market.
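
The traceability and tamper-evidence being described rest on a simple mechanism: each block commits to the hash of the previous one, so altering any past transaction breaks every later link. A minimal sketch, not any production ledger and ignoring consensus entirely:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transaction: dict) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "tx": transaction})

def verify(chain: list) -> bool:
    """Every block must reference the hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain: list = []
append_block(chain, {"from": "A", "to": "B", "inr": 50_000})
append_block(chain, {"from": "B", "to": "C", "inr": 12_000})
print(verify(chain))            # True
chain[0]["tx"]["inr"] = 99_999  # tamper with history
print(verify(chain))            # False: the chain no longer links up
```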


Edge Computing in Plain English With Lori MacVittie

There are a lot of devices and sensors monitoring equipment. I don't know if you've ever been on the floor of a foundry or a manufacturing plant that makes, say, toilet paper, because there's a lot of that in my area. The machines that have taken over to actually do most of that work are incredible, right? They do almost everything, but they also need constant supervision. Who knew machines need supervision? There are sensors and monitors that are constantly gathering information: data about the temperature, about the operation, how much oil is in this one, does this need lubrication? How long has it been working? All that data has to go somewhere, basically to the edge. There's an application that's gathering it all, analyzing it and sending out warnings or, "Hey, it's almost time for maintenance", right? Whatever. But it's also a point of alert. If something happens, it can also turn off a machine, which, when you have people and machines mixed together, especially if you're cutting things like cardboard or paper, means there's potential for real harm to be done. So, they have to be able to react and say, "Turn that off now. Stop that. Alert someone". They need to be able to react very quickly. That is not something you want to have disassociated from the actual location.
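
The pattern MacVittie is describing, local analysis with an immediate safety response, can be sketched in a few lines. Everything here (the threshold, sensor, and actuator functions) is hypothetical; the point is only that the decision loop runs at the edge, next to the machine, rather than waiting on a round trip to a distant cloud.

```python
import random
import time

MAX_SAFE_TEMP_C = 95.0  # hypothetical shutdown threshold

def read_temperature() -> float:
    """Stand-in for a real sensor read."""
    return random.uniform(60.0, 110.0)

def shut_down_machine() -> None:
    print("EMERGENCY STOP issued locally, no cloud round trip")

def alert_operator(temp: float) -> None:
    print(f"ALERT: temperature {temp:.1f} C exceeds safe limit")

while True:
    temp = read_temperature()
    if temp > MAX_SAFE_TEMP_C:
        shut_down_machine()   # react in milliseconds, at the edge
        alert_operator(temp)
        break
    time.sleep(0.1)           # poll interval; tune per machine
```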



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - June 13, 2021

The race is on for quantum-safe cryptography

Existing encryption systems rely on specific mathematical equations that classical computers aren’t very good at solving — but quantum computers may breeze through them. As a security researcher, Chen is particularly interested in quantum computing’s ability to solve two types of math problems: factoring large numbers and solving discrete logarithms. Pretty much all internet security relies on this math to encrypt information or authenticate users in protocols such as Transport Layer Security. These math problems are simple to perform in one direction, but difficult in reverse, and thus ideal for a cryptographic scheme. “From a classical computer’s point of view, these are hard problems,” says Chen. “However, they are not too hard for quantum computers.” In 1994, the mathematician Peter Shor outlined in a paper how a future quantum computer could solve both the factoring and discrete logarithm problems, but engineers are still struggling to make quantum systems work in practice. While several companies like Google and IBM, along with startups such as IonQ and Xanadu, have built small prototypes, these devices cannot perform consistently, and they have not conclusively completed any useful task beyond what the best conventional computers can achieve.
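
That "easy one way, hard in reverse" property is simple to demonstrate. In the sketch below the prime is a toy, far below real key sizes: computing g^x mod p is one fast operation, while recovering x from the result already takes about a million brute-force steps, and at real sizes the best classical algorithms remain super-polynomial where Shor's quantum algorithm would be polynomial.

```python
# Discrete logarithm: fast in one direction, brute force in reverse.
p = 2_147_483_647   # toy prime (2^31 - 1); real systems use ~2048-bit groups
g = 16_807          # 7^5, a primitive root modulo p
x = 1_234_567       # the secret exponent

y = pow(g, x, p)    # forward direction: one fast modular exponentiation

def discrete_log(g: int, y: int, p: int) -> int:
    """Naive reverse direction: search for x such that g^x = y (mod p)."""
    acc, k = 1, 0
    while acc != y:
        acc = (acc * g) % p
        k += 1
    return k

print(discrete_log(g, y, p) == x)  # True, but only after ~1.2M steps here;
                                   # at real sizes this search is infeasible
```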


Lightbend’s Akka Serverless PaaS to Manage Distributed State at Scale

Up to now, serverless technology has not been able to support the stateful, high-performance, scalable applications that enterprises are building today, Murdoch said. Examples of such applications include consumer and industrial IoT, factory automation, modern e-commerce, real-time financial services, streaming media, internet-based gaming and SaaS applications. “Stateful approaches to serverless application design will be required to support a wide range of enterprise applications that can’t currently take advantage of it, such as e-commerce, workflows and anything requiring a human action,” said William Fellows, research director for cloud native at 451 Research. “Serverless functions are short-lived and lose any ‘state’ or context information when they execute.” Lightbend, with Akka Serverless, has addressed the challenge of managing distributed state at scale. “The most significant piece of feedback that we’ve been getting from the beta is that one of the key things that we had to do to build this platform was to find a way to be able to make the data be available in memory at runtime automatically, without the developer having to do anything,” Murdoch said.
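
To make the stateless-versus-stateful distinction concrete, here is a rough Python sketch of the pattern, not the Akka Serverless API itself: a plain serverless function forgets everything between invocations, while an "entity" keeps its state in memory and has it handed to each handler automatically.

```python
# A plain serverless function: all context vanishes after each invocation.
def handle_request(event: dict) -> dict:
    count = 0           # no memory of previous calls; real state would have
    count += 1          # to be fetched from a database on every invocation
    return {"count": count}

# Sketch of a stateful-entity pattern (illustrative, not Lightbend's API):
# the platform keeps one live state object per entity key and routes
# commands to it, so handlers see current state without a database read.
class CartEntity:
    def __init__(self, entity_id: str):
        self.entity_id = entity_id
        self.items: dict[str, int] = {}   # state held in memory at runtime

    def add_item(self, sku: str, qty: int) -> dict:
        self.items[sku] = self.items.get(sku, 0) + qty
        return {"items": self.items}

entities: dict[str, CartEntity] = {}

def route(entity_id: str, command: str, **kwargs) -> dict:
    entity = entities.setdefault(entity_id, CartEntity(entity_id))
    return getattr(entity, command)(**kwargs)

print(route("cart-42", "add_item", sku="book", qty=1))
print(route("cart-42", "add_item", sku="book", qty=2))  # state survived
```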


Can We Balance Accuracy and Fairness in Machine Learning?

While challenges like these often sound theoretical, they already affect and shape the work that machine learning engineers and researchers produce. Angela Shi looks at a practical application of this conundrum when she explains the visual representation of bias and variance in bull's-eye diagrams. Taking a few steps back, Federico Bianchi and Dirk Hovy’s article identifies the most pressing issues the authors and their colleagues face in the field of natural language processing (NLP): “the speed with which models are published and then used in applications can exceed the discovery of their risks and limitations. And as their size grows, it becomes harder to reproduce these models to discover those aspects.” Federico and Dirk’s post stops short of offering concrete solutions—no single paper could—but it underscores the importance of learning, asking the right (and often most difficult) questions, and refusing to accept an untenable status quo. If what inspires you to take action is expanding your knowledge and growing your skill set, we have some great options for you to choose from this week, too.
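
For readers who want the bias-variance picture in code rather than a bull's-eye diagram, a minimal simulation, separate from the articles cited, estimates both terms by refitting the same estimator on many resampled datasets:

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 3.0          # the quantity our estimator targets
n_datasets, n_samples = 2_000, 10

# Each row is one noisy dataset drawn around the true value.
data = true_value + rng.normal(0.0, 2.0, size=(n_datasets, n_samples))

# A deliberately biased but low-variance estimator: shrink the mean toward 0.
estimates = 0.7 * data.mean(axis=1)

bias_sq = (estimates.mean() - true_value) ** 2
variance = estimates.var()
print(f"bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")
# Dropping the 0.7 shrinkage drives bias^2 to ~0 but inflates the variance:
# exactly the trade-off the bull's-eye diagrams visualize.
```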


The secret of making better decisions, faster

While agility might be critical for sporting success, that doesn't mean it's easily achieved. Filippi tells ZDNet he's spent many years building a strong team, with great heads of department who are empowered to make big calls. "Most of the time you trust them to get on with it," he says. "I'm more of an orchestrator – you cannot micromanage a race team because there's just too much going on. The pace and the volume of work being achieved every week is just mind-blowing." Hackland has similar experiences at Williams F1. Employees are empowered to take decisions and their confidence to make those calls in the factory or out on the track is a crucial component of success. "The engineer who's sitting on the pit wall doesn't have to ask the CIO if we should pit," he says. "The decisions that are made all through the organisation don't feed up to one single individual. Everyone is allowed to make decisions up or down the organisation." As well as being empowered to make big calls, Hackland says a no-blame culture is critical to establishing and supporting decentralised decision making in racing teams.


How to avoid the ethical pitfalls of artificial intelligence and machine learning

Disconnects also exist between the key functional stakeholders required to make sound, holistic judgements around ethics in AI and ML. “There is a gap between the bit that is the data analytics AI, and the bit that is the making of the decision by an organisation. You can have really good technology and AI generating really good outputs that are then used really badly by humans, and as a result, this leads to really poor outcomes,” says Prof. Leonard. “So, you have to look not only at what the technology in the AI is doing, but how that is integrated into the making of the decision by an organisation.” This problem exists in many fields, and it is particularly prevalent in digital advertising. Chief marketing officers, for example, set marketing strategies that depend on advertising technology, which is in turn managed by a technology team. Separate from both is data privacy, which is managed by yet another team, and Prof. Leonard says these teams do not speak the same language, making it hard to arrive at a strategically cohesive decision.


Five types of thinking for a high performing data scientist

As data scientists, the first and foremost skill we need is to think in terms of models. In its most abstract form, a model is any physical, mathematical, or logical representation of an object, property, or process. Let’s say we want to build an aircraft engine that will lift heavy loads. Before we build the complete aircraft engine, we might build a miniature model to test the engine for a variety of properties (e.g., fuel consumption, power) under different conditions (e.g., headwind, impact with objects). Even before we build a miniature model, we might build a 3-D digital model that can predict what will happen to the miniature model built out of different materials. ... Data scientists often approach problems with cross-sectional data at a point in time to make predictions or inferences. Unfortunately, given the constantly changing context around most problems, very few things can be analyzed statically. Static thinking reinforces the ‘one-and-done’ approach to model building that is misleading at best and disastrous at its worst. Even simple recommendation engines and chatbots trained on historical data need to be updated on a regular basis. 
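
The "one-and-done" failure mode is easy to see in code. Here is a minimal sketch, with illustrative numbers throughout, of the alternative: retraining on a sliding window so a trivial model keeps tracking a drifting process instead of freezing at its first fit.

```python
import numpy as np

rng = np.random.default_rng(1)
WINDOW = 50  # retrain on the most recent 50 observations

stream, static_err, rolling_err = [], [], []
static_model = None

for t in range(500):
    # The underlying process drifts over time (concept drift).
    y = 0.01 * t + rng.normal(0.0, 0.5)
    stream.append(y)
    if len(stream) < WINDOW:
        continue
    if static_model is None:
        static_model = float(np.mean(stream))        # trained once, never updated
    rolling_model = float(np.mean(stream[-WINDOW:]))  # retrained every step
    static_err.append((y - static_model) ** 2)
    rolling_err.append((y - rolling_model) ** 2)

print(f"one-and-done MSE:    {np.mean(static_err):.3f}")
print(f"rolling-retrain MSE: {np.mean(rolling_err):.3f}")  # much lower
```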


Double Trouble – the Threat of Double Extortion Ransomware

Over the past 12 months, double extortion attacks have become increasingly common as their ‘business model’ has proven effective. The data center giant Equinix was hit by the NetWalker ransomware. The threat actor behind that attack was also responsible for the attack against K-Electric, the largest power supplier in Pakistan, demanding $4.5 million in Bitcoin for decryption keys and to stop the release of stolen data. Other companies known to have suffered such attacks include the French system and software consultancy Sopra Steria; the Japanese game developer Capcom; the Italian liquor company Campari Group; the US military missile contractor Westech; the global aerospace and electronics engineering group ST Engineering; travel management giant CWT, which paid $4.5M in Bitcoin to the Ragnar Locker ransomware operators; business services giant Conduent; and even soccer club Manchester United. Research shows that in Q3 2020, nearly half of all ransomware cases included the threat of releasing stolen data, and the average ransom payment was $233,817 – up 30% compared to Q2 2020. And that’s just the average ransom paid.


Evolution of code deployment tools at Mixpanel

Manual deploys worked surprisingly well while we were getting our services up and running. More and more features were added to our mix tool to interact not just with k8s but also with other GCP services. To avoid dealing with raw YAML files directly, we moved our k8s configuration management to Jsonnet. Jsonnet allowed us to add templates for commonly used paradigms and reuse them across deployments. At the same time, we kept adding more k8s clusters: we added geographically distributed clusters to run the servers handling incoming data, to decrease the latency perceived by our ingestion API clients. Around the end of 2018, we started evaluating a European Data Residency product, which required us to deploy another full copy of all our services in two zones in the European Union. We were now up to 12 separate clusters, many of them running the same code with similar configurations. While manual deploys worked fine when we ran code in just two zones, it quickly became infeasible to keep 12 separate clusters in sync manually. Across all our teams, we run more than 100 separate services and deployments.
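
The configuration-reuse idea behind that Jsonnet move can be sketched in Python (a toy stand-in for Jsonnet templates, with made-up service and cluster names): define one template for a common paradigm and stamp out per-cluster variants, instead of hand-editing a dozen copies of raw YAML that drift out of sync.

```python
def deployment_template(service: str, image: str, replicas: int = 3) -> dict:
    """One reusable template for a commonly used deployment paradigm."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": service},
        "spec": {
            "replicas": replicas,
            "template": {"spec": {"containers": [
                {"name": service, "image": image},
            ]}},
        },
    }

# Hypothetical cluster list with per-cluster overrides.
CLUSTERS = {
    "us-central1-a": {},
    "us-east1-b": {},
    "europe-west1-c": {"replicas": 5},  # EU data-residency copy runs larger
}

manifests = {
    cluster: deployment_template(
        "ingest-api", "gcr.io/example/ingest:v42",
        replicas=overrides.get("replicas", 3),
    )
    for cluster, overrides in CLUSTERS.items()
}
print(manifests["europe-west1-c"]["spec"]["replicas"])  # 5
```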


When physics meets financial networks

Generally, physics and financial systems are not easily associated in people's minds. Yet, principles and techniques originating from physics can be very effective in describing the processes taking place on financial markets. Modeling financial systems as networks can greatly enhance our understanding of phenomena that are relevant not only to researchers in economics and other disciplines, but also to ordinary citizens, public agencies and governments. The theory of Complex Networks represents a powerful framework for studying how shocks propagate in financial systems, identifying early-warning signals of forthcoming crises, and reconstructing hidden linkages in interbank systems. ... Here is where network theory comes into play, by clarifying the interplay between the structure of the network, the heterogeneity of the individual characteristics of financial actors and the dynamics of risk propagation, in particular contagion, i.e. the domino effect by which the instability of some financial institutions can reverberate to other institutions to which they are connected. The associated risk is indeed "systemic", i.e. both produced and faced by the system as a whole, as in collective phenomena studied in physics.
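
A toy version of the contagion dynamics described here fits in a few lines (stylized numbers, not a calibrated model): each bank holds claims on others, a failure wipes out those claims, and any bank whose losses exceed its capital buffer fails in turn.

```python
# Toy interbank contagion: exposures[i][j] = what bank i is owed by bank j.
exposures = {
    "A": {"B": 8.0},
    "B": {"C": 6.0},
    "C": {"A": 1.0},
    "D": {"A": 0.5},
}
capital = {"A": 2.0, "B": 5.0, "C": 4.0, "D": 3.0}  # loss-absorbing buffers

def contagion(initial_failure: str) -> set:
    failed = {initial_failure}
    changed = True
    while changed:  # propagate until no new bank fails (the domino effect)
        changed = False
        for bank, claims in exposures.items():
            if bank in failed:
                continue
            loss = sum(v for debtor, v in claims.items() if debtor in failed)
            if loss > capital[bank]:
                failed.add(bank)
                changed = True
    return failed

print(contagion("B"))  # B's failure takes down A via A's large claim on B
```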


What’s Driving the Surge in Ransomware Attacks?

The trend involves a complex blend of geopolitical and cybersecurity factors, but the underlying reasons for its recent explosion are simple. Ransomware attacks have gotten incredibly easy to execute, and payment methods are now much more friendly to criminals. Meanwhile, businesses are growing increasingly reliant on digital infrastructure and more willing to pay ransoms, thereby increasing the incentive to break in. As the New York Times notes, for years “criminals had to play psychological games to trick people into handing over bank passwords and have the technical know-how to siphon money out of secure personal accounts.” Now, young Russians with a criminal streak and a shortage of cash can simply buy the software and learn the basics from YouTube tutorials, or get help from syndicates like DarkSide, which set clients up to hack into businesses in exchange for a fee and a portion of the proceeds. The breach of the education publisher involving the false pedophile threat was a successful example of such a criminal exchange. Meanwhile, Bitcoin has made it much easier for cybercriminals to collect on their schemes.



Quote for the day:

"To make a decision, all you need is authority. To make a good decision, you also need knowledge, experience, and insight." -- Denise Moreland