Daily Tech Digest - June 22, 2021

What makes a real-time enterprise?

Being a ‘real-time’ enterprise is typically evaluated against two criteria: the ability to capture, collect and store data as it arrives, and the ability to respond to it at the point of consumption. Analytics solutions that enable this are highly sought after, as real-time capability is considered a major competitive differentiator in our fast-paced digital world. However, while there is plenty of buzzword bingo around real-time data, decision-making and insight, enterprises vary widely in their readiness to become real-time, largely because of a limited understanding of how it practically aligns with their goals, which results in lost opportunities and wasted resources. ... We find that the sudden, hurried push among enterprises to adopt real-time analytics typically starts when organisations examine their data and realise they are not making decisions fast enough to affect business outcomes. Many organisations misconstrue the cause of these common analytics problems as a lack of real-time analytics capability, when several other factors are likely preventing them from making decisions efficiently and effectively, such as a long and arduous analysis process, analysis fatigue and human bias that reduce insight to accidental discovery, and a lack of guidance in understanding what the insights mean.


Does Your Cyberattack Plan Include a Crisis Communications Strategy?

During a cyberattack, one of the most overlooked — and consequential — areas for enterprises is implementing an effective crisis communications strategy. Just as you need to shore up the technology, legal, financial, and compliance aspects of your cybersecurity preparation plan, you must also prioritize crisis management and communications. But where should you start? Below are five crisis communications tips to form the foundation of your strategy. ... Our media landscape is characterized by a 24/7 news cycle, ubiquitous social media channels, and misinformation powered by algorithmic artificial intelligence (AI) and delivered instantly, on a global scale, to billions of people. This shows no sign of abating. What does that mean? Time is not on your side. But with an actionable plan in place, you will be much better prepared. ... With your crisis communications framework in place, it is time for action. Picture this: your company is the target of a ransomware attack, and while you are desperately trying to contain it, the media begin to report the incident, citing posts on Twitter.


How to Retain Your IT Talent

It seems easy to create an open and collaborative work culture, but in IT it can be a special challenge. This is because the nature of IT work is factual and introspective. It's easy to get buried in a project and forget to communicate status to a workmate -- or, as a CIO, to be consumed by planning or budgeting and forget to “walk the floor” and visit with staff members. Those heading up IT can make a conscious effort to improve open communication and engagement by setting an example of personal engagement with staff. When staff members understand IT’s strategic direction because the CIO has directly communicated it to them, as well as why they are undertaking certain projects, work becomes purposeful. Team members also benefit when they know that support is available when they need it, and that they can freely go to anyone's office, from the CIO on down. The net result is that people are happier at work and less likely to leave an inclusive work culture. ... From here, training and mentoring plans for developing employee potential should be defined and followed. Career and skills development plans should be targeted at up-and-coming employees and recent hires, as well as at longer-term staff who want to cross-train and learn something new.


The positive levers of a digital transformation journey

It’s not just processes. People play an equally important role in the transformation exercise. Shifting from a traditional workplace to a digital one involves an overall change in the mindset of the people behind the business. A company’s culture and behaviour determine how well it can adapt to being ‘digital first’. To undertake digital transformation seamlessly, many organisations ensure transparency by communicating their expectations clearly to their employees. This transformation also helps highlight skill gaps within the organisation and sheds light on which of these gaps can be filled by AI and automation, allowing employee intelligence to be repurposed. Rahul Tandon, head of digital transformation at BPCL, said, “Many initiatives and developments are bringing in a lot of automation and AI with a clear objective to absolve our field teams of all repetitive transactional activities and focus solely on business development and efficient customer interactions.” This approach, he says, has infused new energy into the field teams. “We hope it will become the preferred choice for all stakeholders and eventually impact our bottom line positively.”


How to rethink risks with new cloud deployments

With microservices, you have hundreds of different functions running separately, each with its own unique purpose and triggered by different events. Each of these functions requires its own authentication protocol, and that leaves room for error. Attackers will look for things like a forgotten resource, redundant code, or open APIs with known security gaps to gain access to the environment. This can allow an attacker to reach a website containing sensitive content or functions without having to authenticate properly. While the service provider will handle much of the password management and recovery workflows, it is up to customers to make sure that the resources themselves are properly configured. However, things get more complicated when functionality is triggered not by an end-user request but during the application flow, in a way that bypasses the authentication schema. To address this issue, it is important to continuously monitor your application, including the application flow, so you can identify application triggers. From there, you will want to create and categorize alerts for when resources lack the appropriate permissions, carry redundant permissions, or exhibit triggered behavior that is anomalous or non-compliant.
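
Purely as an illustration of that kind of alerting (the data model, field names and policy values below are hypothetical, not tied to any particular cloud provider), a simple audit pass over serverless function descriptions might look like this:

```python
# Minimal sketch (hypothetical data model): flag serverless resources whose
# permissions or trigger behaviour deviate from an approved baseline.
APPROVED_TRIGGERS = {"http", "queue"}          # assumption: allowed event sources
REQUIRED_PERMISSIONS = {"logs:write"}          # assumption: every function must log

def audit_function(resource: dict) -> list[str]:
    """Return alert strings for one serverless function description."""
    alerts = []
    perms = set(resource.get("permissions", []))
    if not REQUIRED_PERMISSIONS <= perms:
        alerts.append(f"{resource['name']}: missing {REQUIRED_PERMISSIONS - perms}")
    unused = perms - set(resource.get("permissions_used", []))
    if unused:
        alerts.append(f"{resource['name']}: redundant permissions {unused}")
    bad_triggers = set(resource.get("triggers", [])) - APPROVED_TRIGGERS
    if bad_triggers:
        alerts.append(f"{resource['name']}: anomalous triggers {bad_triggers}")
    return alerts

if __name__ == "__main__":
    example = {"name": "export-report",
               "permissions": ["s3:read", "logs:write"],
               "permissions_used": ["logs:write"],
               "triggers": ["http", "cron"]}
    for alert in audit_function(example):
        print(alert)
```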


How Containers Simplify DevOps Workflows and CI/CD Pipelines

DevOps has created a way to automate the processes used to build, test and deploy code faster and more reliably. Continuous integration/continuous delivery (CI/CD) isn’t a novel concept, but tools like Jenkins have done much to define what a CI/CD pipeline should look like. While DevOps represents a cultural change in the organization, CI/CD is the core engine that drives the success of DevOps. With CI, teams implement smaller changes more often and check their code into shared version control repositories. The result is far more consistency in the building, packaging and testing of apps, leading to better collaboration and software quality. CD begins where CI ends. Since teams work across several environments (prod, dev, test, etc.), the role of CD is to automate code deployment to these environments and execute the necessary service calls to databases and servers. The CI/CD concept isn’t entirely new, but it’s only now that we have the right tools to fully reap its benefits. Containers make it extremely easy to implement a CI/CD pipeline and enable a much more collaborative culture.
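
As a sketch of how containers slot into those stages (assuming the Docker CLI is installed and using a hypothetical registry and image name; a real pipeline would be driven by a CI server such as Jenkins rather than a standalone script), the core build-test-publish loop might look like this:

```python
# Illustrative only: build the image, run the test suite inside it, and push it
# so the CD stage can deploy the same artifact to each environment.
import subprocess
import sys

IMAGE = "registry.example.com/myapp:latest"   # hypothetical registry/image

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)            # stop the pipeline on any failure

def pipeline():
    run(["docker", "build", "-t", IMAGE, "."])             # CI: build and package
    run(["docker", "run", "--rm", IMAGE, "pytest", "-q"])  # CI: test inside the image
    run(["docker", "push", IMAGE])                         # hand-off to CD: publish

if __name__ == "__main__":
    try:
        pipeline()
    except subprocess.CalledProcessError as err:
        sys.exit(f"pipeline failed: {err}")
```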


Automation Is a Game Changer, Not a Job Killer

While many businesses embrace the positives of digitization, employees approach these changes with far less enthusiasm. Words like “automation” and “digitization” are loaded with baggage, evoking negative associations with job loss. Employees are quick to assume the worst, fearing they’ll be left behind or eliminated. But is that fear warranted? Not according to BDO’s recent survey of middle-market executives. The majority of companies are adding new digital enablement projects, with 34% planning to increase headcount and 42% comprehensively re-imagining job roles. Only 22% expect the use of automation to have a negative impact on headcount. In most cases, jobs are changing and evolving, requiring employees to work alongside new technologies, develop new skill sets and integrate automation into their daily work lives. But for these digital initiatives to succeed, organizations need to secure employee buy-in. Otherwise, initiatives will fall well short of reaching maximum ROI. So, how can CIOs and IT leaders turn resistance into adoption and dispel unwarranted fears among the workforce?


Bugs in NVIDIA’s Jetson Chipset Open Door to DoS Attacks, Data Theft

The most severe bug, tracked as CVE‑2021‑34372, opens the Jetson framework to a buffer-overflow attack by an adversary. According to the NVIDIA security bulletin, the attacker would need network access to a system to carry out an attack, but the company warned that the vulnerability is not complex to exploit and that an adversary with low-level access rights could launch it. It added that an attack could give an adversary persistent access to components beyond the targeted NVIDIA chipset and allow a hacker to manipulate and/or sabotage a targeted system. “[The Jetson] driver contains a vulnerability in the NVIDIA OTE protocol message parsing code where an integer overflow in a malloc() size calculation leads to a buffer overflow on the heap, which might result in information disclosure, escalation of privileges and denial of service (DoS),” according to the security bulletin, posted on Friday. Oblivious transfer extensions (OTE) are low-level cryptographic algorithms used by Jetson chipsets in private-set-intersection protocols that secure data as the chip processes it.
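
To see how this class of bug works in general terms (this is a generic illustration, not NVIDIA's actual parsing code), the sketch below shows a 32-bit size calculation wrapping around so the heap allocation ends up far smaller than the data the parser later writes into it:

```python
# Generic illustration of an integer overflow in an allocation-size calculation:
# an attacker-controlled element count makes `count * ELEM_SIZE` wrap at 32 bits.
ELEM_SIZE = 16
UINT32_MAX = 2**32 - 1

def alloc_size(count: int) -> int:
    """Size as a 32-bit C expression `count * ELEM_SIZE` would compute it."""
    return (count * ELEM_SIZE) & UINT32_MAX

attacker_count = 0x10000001                  # chosen so count * 16 exceeds 2**32
print(hex(alloc_size(attacker_count)))       # 0x10: only a 16-byte buffer is allocated
print(attacker_count * ELEM_SIZE)            # bytes the parser will actually try to copy
```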


How can technology design be made more inclusive?

With an increasing reliance on screens to communicate, organisations should also look to ensure that product design addresses how the software facilitates this, and make adjustments where necessary. “Brands must consider all forms of disabilities, such as vision and hearing impairments, as well as conditions like autism, at the very beginning of the design process,” said Paul Clark, senior vice-president and EMEA managing director at Poly. “At Poly, we’ve spent a lot of time making our solutions more accessible. For example, one of our customers’ employees is highly motivated to contribute but has Duchenne muscular dystrophy and was self-conscious about the loud, high-pitched noises that his ventilator made during calls. Poly’s NoiseBlock AI technology has been built into all of our headsets and video bars to minimise non-human sounds. Our personal video bar was able to tell that the ventilator noises were not speech and blocked them out. “Simple solutions like raised volume buttons enable the user to recognise controls by touch instead of sight. Brands should also consider ease of use and comfort for people who wear a headdress, for example.”


Driving network transformation with unified communications

As with most digital processes, cybersecurity remains a primary concern for businesses. With the increased use of UC platforms, such as Microsoft Teams, new security challenges are emerging, and quite often these vulnerabilities come from actions that we do not think twice about. Video recordings, for example, often contain sensitive and confidential information that could prove detrimental if discovered outside of the company. Yet these recordings are typically stored on a server, or downloaded onto a desktop, without much consideration. In addition to threats against sensitive content and data, real-time collaboration can create security weaknesses. With the right tools, criminals could acquire the link needed to access private conferences and documents on a UC platform. Whether the aim is simply to eavesdrop or to cause disruption, such a breach could have a number of consequences, both short and long term. Again, these calls and documents may contain confidential details that could be exploited by criminals if leaked. Disruptions to conferences will not only cause frustration in the moment, but can also damage the reputation of organizations.



Quote for the day:

"Keep your fears to yourself, but share your courage with others." -- Robert Louis Stevenson

Daily Tech Digest - June 21, 2021

Enterprises Face Growing Technical Debt

There is no single factor causing technical debt. Many issues within an organization contribute to the problem, including reliance on stale technology, pressure to deliver in the short term, constant change, developer churn and incorrect architectural decisions. The top cause of technical debt is too many development languages and frameworks, cited by 52% of respondents as a big or critical problem. Legacy technology can weigh an IT department down. But it’s not necessarily only old tech getting in the way—it could be that IT is supporting too many competing agendas. The second top cause is a high turnover rate within developer teams. In today’s competitive climate, quality engineers are in short supply, and hiring can be challenging. It is thus difficult to attract and nourish steady engineering talent. If developers frequently leave for greener pastures, especially before documenting their procedures, best practices can easily be lost and efficient use of technology is stunted. The study found that other common causes include accepting known defects to meet deadlines, using outdated programming languages and frameworks, and dealing with challenges in serving new markets or segments.


No code software — the most effective path to closing the IT skills gap

The tech sector has long been governed by a certain subset of society and has lacked diversity. According to Diversity in Tech, 15% of the tech workforce are from BAME backgrounds and gender diversity stands at 19%, compared to 49% for all other jobs within the UK. Considering the tech industry is growing almost three times as rapidly as the rest of the UK economy, tech and software development is a lucrative, in-demand field for those with the skills. However, there is no doubt it’s exclusionary. While this is a recognised issue that many are keen to rectify, change has been slow. Socioeconomic dynamics mean privileged groups prevail. Change must happen at grassroots level. If children don’t have access to devices at home, attend schools with archaic software and hardware, or aren’t equipped with a support mechanism or role models, they will find themselves on the back foot for a career in tech. Roles such as software development take time to train and prepare for, meaning they can be hard to break into without background experience. The lack of gender-diverse and BAME role models within the tech industry also perpetuates this imbalance.


Google’s health care data-sharing partnership is a problem

Privacy concerns are not just related to the fact that stolen data could potentially harm patients and consumers, however. They are also tied to the simple reality that individuals feel as though they have no say in how their personal data is acquired, stored, and used by entities with which they have not meaningfully consented to share their information. According to the Pew Research Center, more than half of Americans have no clear understanding of how their data is used once it has been collected, and some 80% are concerned about how much of their data advertisers and social media companies have collected. ... The legitimate concerns of consumers combined with a massive and growing amount of data theft make agreements like the one between Google and HCA unwise, despite potential benefits. While the data that Google will have access to will be anonymized and secured through Google’s Cloud infrastructure, it will be stored without the consent of patients, whose deeply personal information is in question. This is because privacy laws in the United States allow hospitals to share patient information with contractors and researchers even when patients have not consented.


What Is A Convolutional Layer?

Many classification tasks involve images and videos, and for these tasks the convolutional layer plays a key role. “In mathematics, convolution is a mathematical operation on two functions that produces a third function expressing how the shape of one is modified by the other.” Applying that definition, convolution in a CNN is an operation in which two images, represented as matrices, are multiplied element-wise and summed to produce an output used to extract features from an image. Convolution is the simple application of a filter to an input image that results in an activation, and repeated application of the same filter across the image produces a map of activations, called a feature map, indicating the location and strength of a detected feature in the input image. ... The CNN is a special type of neural network model designed to work on image data, which can be one-dimensional, two-dimensional, and sometimes three-dimensional. Its applications range across image and video recognition, image classification, medical image analysis, computer vision and natural language processing.
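
A minimal NumPy sketch of that sliding-filter operation (a toy 4x4 image and a hand-made 2x2 edge filter, chosen purely for illustration) makes the feature map concrete:

```python
# Slide a small filter over an image matrix, summing element-wise products at
# each position to build the feature map described above.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    feature_map = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            feature_map[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return feature_map

image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
vertical_edge = np.array([[-1, 1],
                          [-1, 1]], dtype=float)   # responds where values rise left to right
print(conv2d(image, vertical_edge))                # strongest activation along the 0-to-1 edge
```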


Krustlet Brings WebAssembly to Kubernetes with a Rust-Based Kubelet

By enabling WebAssembly, Krustlet offers increased density, faster startup and shutdown times, and smaller network and storage footprints, all of which support not only microservices but also operation at the edge and in IoT environments. In addition, WebAssembly offers the ability to run on multiple architectures without being recompiled, has a security model that distrusts the guest by default, and can be executed by an interpreter and streamed in, meaning it can run on the smallest of devices. “Krustlet, potentially combined with things like SUSE/Rancher’s k3s, can make inroads into IoT by providing a small-footprint extension to a Kubernetes cluster. This points to a sea change occurring in Kubernetes. When some folks at Google first wrote Kubernetes, they were thinking about clusters in the data center. But why think only in terms of the data center?” asks Butcher. “Imagine a world where the pod could be dynamically moved as close to the user as possible — down to a thermostat or a gaming console or a home router. And then, as the person left home, that app could ‘leave’ with them, hopping to the next closest place — yet still within the same cluster. Certainly, that’s tomorrow’s world, but Krustlet is a step toward realizing it.”


Top 5 Cloud Security Challenges Teams Face In 2021

Unfortunately, many teams don’t think about security, and sometimes even overall governance, until it’s too late. Whether they don’t have the budget, think they don’t yet have the scale, or it’s just not top of mind, procrastinating on cloud security can expose an organization to breaches, non-compliance, and other high-risk issues. On the flip side, organizations might initially have taken too heavy-handed an approach and implemented controls so strict that they prevent the organization from fully realizing the promise of cloud and DevOps in the future. Thinking about cloud security should happen early, and it includes implementing not just the right tools, but also the right processes and people. And it’s never too early to start, because security needs to be woven into your process from the beginning. ... Organizations wanting to keep on top of their cloud security need to prioritize constant education and upskilling, not just around traditional security applied to the cloud but also around industry best practices and cloud fundamentals. Identify team members willing to go deeper and pair them with industry experts within the organization, or take advantage of free educational tools from the major cloud providers to keep your team’s knowledge base wide and ever-evolving.


Embrace integrations and automation as you build a security program

In enterprise systems, automation refers to the ability to take a human-operated task, reduce it to a data model, and then script it in code for repeatability. Compliance has typically been a labor-intensive practice. Given the variety and amount of human labor required to meet compliance objectives, automation often cannot be applied broadly. Audit evidence collection, via an integration, lends itself well to an automated solution, and this form of automation can also ensure the timeliness of evidence-collection activity. However, it represents only a tiny percentage of the labor required to pass an audit. All organizations can realize benefits from automated compliance by considering which tasks would traditionally require a consultant, and whether those tasks are repeatable across consultants: performing an annual risk assessment, for example. Another example is mapping exercises between an organization’s cybersecurity policies and controls and a common standard such as ISO 27001 or SOC 2. People are still required to ensure that the quality of these tasks is acceptable.
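
As one hedged example of what automated evidence collection can look like (AWS IAM is used purely as an illustration; this assumes boto3 is installed and credentials are configured, and the file naming and control label are hypothetical):

```python
# Snapshot the current IAM user list into a timestamped JSON file that can be
# attached to an audit as point-in-time evidence of an access review.
import datetime
import json
import boto3

def collect_iam_user_evidence(path_prefix: str = "iam-users") -> str:
    iam = boto3.client("iam")
    users = iam.list_users()["Users"]
    snapshot = {
        "collected_at": datetime.datetime.utcnow().isoformat() + "Z",
        "control": "access-review",               # hypothetical control identifier
        "users": [u["UserName"] for u in users],
    }
    out_path = f"{path_prefix}-{datetime.date.today().isoformat()}.json"
    with open(out_path, "w") as fh:
        json.dump(snapshot, fh, indent=2)
    return out_path

if __name__ == "__main__":
    print("evidence written to", collect_iam_user_evidence())
```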


6 Steps Companies Can Take to Strengthen Their Cyber Strategy

With more than a year of remote work for hundreds of thousands of people, many companies historically known for on-premises infrastructures are now shifting to multi-cloud strategies. Multi-cloud strategies are valuable because they provide the best possible cloud service for each workload. Today, our cybersecurity group is partnering with our digital transformation team to enable multi-cloud adoption in a way that advances and streamlines our specific business operations. Cyber leaders should develop risk controls upfront when ushering in multi-cloud strategies so that they don’t hinder the pace of adoption, while also protecting the company’s assets and data. ... Biometrics are a significant game-changer in cyber protection. It’s much harder for a threat actor to break into a system built around behavioral attributes -- like how quickly people type, how they move their mouse, or what applications they have open -- than a system reliant on static passwords. In fact, we’re working with our data science team to pilot our own data models, leveraging new technologies available in the industry to replace passwords internally over time.


Why big data doesn’t exist — it’s all about the value

Over the past decade there has been an abundance of cases where well-known brands, which typically sit on a mammoth amount of historic data, have collapsed because they did not handle it effectively. Companies including retailer Toys “R” Us, book chain Borders and, more recently, department store Debenhams failed to optimise operations quickly enough to stay relevant in a highly competitive digital environment. Had they responded to what their data analysis was telling them, the outcome for these businesses could have been different. Adopting technology that can process and manage data, as well as provide real-time visualisations of what is happening within the organisation, can deliver greater insight into everything from product materials and production rates to customer shopping habits and market trends. By knowing what’s working and what’s not, businesses can make decisions based on the evidence the data shows, rather than relying on ‘gut instinct’. The pandemic is an excellent example of how valuable data, rather than simply big data, can be used to drive decisions, as many businesses were forced to accelerate their digital strategies to remain viable. Management consultancy McKinsey reports that the crisis brought about years of change to the way all companies and sectors do business.


Leveraging Small Teams to Scale Agility - a Red Hat Case Study

Doing Agile does not always mean being Agile. The starting state of this group demonstrated this in the way they were working. They’d had training and had wound up with a rotating cast of characters in scrum teams with 10+ boards. After I arrived, we did another training cycle on Agile. This occurred after the team had committed to doing Agile, and it helped everyone acquire the tools for success. Even with their previous challenges, they, like many teams new to Agile, got excited. Knowledge is power. But even moving the team members from novice to amateur still left them struggling with concepts like capacity. Like many teams, they struggled to understand what their capacity level was and tended to overcommit the volume of work to be completed in each Sprint. This is where a Scrum master can help by guiding a team to maturity as they learn to deliver value and be responsible for it as a team. It would have been impossible for me to do my work if my manager and the team didn’t trust me. I started with the trust given to me by my manager and the trust of the functional managers as a platform to build on.



Quote for the day:

"If you don't understand people, you don't understand business." -- Simon Sinek

Daily Tech Digest - June 20, 2021

The Reality Behind The AI Illusion

So far, AI has shown impressive results only in narrow application areas, like chess-playing computers beating world chess champions and supercomputers beating human Jeopardy champions. However, these are computers programmed to solve one specific problem; they cannot interpret more complex and multilayered challenges beyond the given task. This is exactly what Moravec's paradox states: though it may be easy to get computers to beat human chess champions, it may be difficult to give them the skills of a toddler when it comes to perception and mobility. While AI has not reached human performance, it brings valuable solutions to many real-world problems quickly and effectively. From enhanced healthcare, innovations in banking and improved environmental protection to self-driving vehicles, automated transportation, smart homes and chatbots, AI can offer simpler and more intelligent ways of accomplishing many of our daily tasks. But how far can AI go? Will it ever be able to function autonomously and mimic cognitive human actions? We cannot envision how AI will end up evolving in the far-off future, but at this point, humans remain smarter than any type of AI.


Mastering the Data Monetization Roadmap

The Data Monetization Roadmap provides both a benchmark and a guide to help organizations with their data monetization journey. To successfully navigate the roadmap, organizations must be prepared to traverse two critical inflection points. Inflection Point #1 is where organizations transition from treating data as a cost to be minimized to treating it as an economic asset to be monetized: the “Prove and Expand Value” inflection point. Inflection Point #2 is where organizations master the economics of data and analytics by creating composable, reusable, and continuously refining digital assets that can scale the organization’s data monetization capabilities: the “Scale Value” inflection point. Carefully navigating these two inflection points enables organizations to fully exploit the game-changing economic characteristics of data and analytics assets – assets that never deplete, never wear out, can be used across an unlimited number of use cases at zero marginal cost, and can continuously learn, adapt, and refine themselves, resulting in assets that actually appreciate in value the more they are used.


Will AI Make Interpreters and Sign Language Obsolete?

One of Google’s newest ASR NLPs seeks to change the way we interact with those around us, broadening the scope of where — and with whom — we can communicate. Google’s Interpreter Mode uses ASR to identify what you are saying and spits out an exact translation into another language, effectively enabling a conversation between speakers of different languages and knocking down language barriers. Similar instant-translation tech has also been used by SayHi, which allows users to control how quickly or slowly the translation is spoken. There are still a few issues in ASR systems. In what is often called the AI accent gap, machines sometimes have difficulty understanding individuals with strong accents or dialects. Right now, this is being tackled on a case-by-case basis: scientists tend to use a “single accent” model, in which different algorithms are designed for different dialects or accents. For example, some companies have been experimenting with separate ASR systems for recognizing Mexican Spanish versus the Spanish spoken in Spain. Ultimately, many of these ASR systems reflect a degree of implicit bias. In the United States, African-American Vernacular English ...


Bad cybersecurity behaviors plaguing the remote workforce

Over one quarter of employees admit that, while working from home, they made cybersecurity mistakes — some of which compromised company security — that they say no one will ever know about. 27% say they failed to report cybersecurity mistakes because they feared facing disciplinary action or further required security training. In addition, just half of employees say they always report to IT when they receive or click on a phishing email. ... As lockdown restrictions are lifted, six in 10 IT leaders think the return to business travel will pose greater cybersecurity challenges and risks for their companies. These risks could include a rise in phishing attacks in which threat actors impersonate airlines, booking operators, hotels or even senior executives supposedly on a business trip. There is also the risk that employees accidentally leave devices on public transport or expose company data in public places. ... As cybersecurity will be mission-critical in the new work environment, it’s encouraging that 67% of surveyed IT decision makers report that they have a seat at the table when it comes to office reopening plans in their organizations.


Microsoft's new security tool will discover firmware vulnerabilities

Today, ReFirm needs you to provide the firmware files, but Microsoft plans to create a database of device information, Weston says. "You plug in CyberX and it discovers the devices, it monitors them and it asks ReFirm 'do you know anything about IoT device X or Y'. Hopefully we've pre-scanned most of those devices and we can propagate the information -- and for anything we don't have, there's the drag-and-drop interface to do a custom analysis." Having that visibility of what's on your network and whether it's safe to have on your network is a good first step. The Azure Device Updates service can already push IoT firmware updates out through Windows Update. Microsoft's bigger vision is to create a service based on Windows Update that can handle a much wider range of third-party devices, says Weston. "We're going to take Windows Update, which people already at least know and trust on Patch Tuesdays, and we want to push the IoT and edge devices into that model. Microsoft's update system is a pretty known commodity -- just about every government regulator out there looked at it in one form or another -- and so we feel good about being able to move customers towards it."


Deep Learning, XGBoost, Or Both: What Works Best For Tabular Data?

Today, XGBoost has grown into production-quality software that can process huge swathes of data on a cluster. In the last few years, XGBoost has added multiple major features, such as support for NVIDIA GPUs as a hardware accelerator and for distributed computing platforms including Apache Spark and Dask. However, there have been several recent claims that deep learning models outperform XGBoost. To verify this claim, a team at Intel published a survey of how well deep learning works for tabular data and whether XGBoost's superiority is justified. The authors explored whether DL models should be a recommended option for tabular data by rigorously comparing recent deep learning models to XGBoost on a variety of datasets. The study showed that XGBoost outperformed the DL models across a wide range of datasets and required less tuning. However, the paper also suggested that an ensemble of the deep models and XGBoost performs better on these datasets than XGBoost alone. For the experiments, the authors examined DL models such as TabNet, NODE, DNF-Net and 1D-CNN, along with an ensemble of five different classifiers: TabNet, NODE, DNF-Net, 1D-CNN, and XGBoost.
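
A hedged sketch of that kind of comparison follows; scikit-learn's MLP stands in here for the deep tabular models named in the paper, on a small built-in dataset, and it assumes xgboost and scikit-learn are installed:

```python
# Compare a gradient-boosted tree model, a small neural net, and a simple
# probability-averaging ensemble of the two on one tabular dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

xgb = XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

# Simple ensemble: average the two models' predicted class probabilities.
ens_proba = (xgb.predict_proba(X_te) + mlp.predict_proba(X_te)) / 2

for name, pred in [("xgboost", xgb.predict(X_te)),
                   ("mlp", mlp.predict(X_te)),
                   ("ensemble", ens_proba.argmax(axis=1))]:
    print(name, round(accuracy_score(y_te, pred), 3))
```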


Insider Versus Outsider: Navigating Top Data Loss Threats

While breaches from outside cybercriminals are becoming more complex and require more resources to combat, companies mustn’t lose sight of a data-loss cause closer to home – their employees. In their day-to-day positions, employees are entrusted with highly sensitive information, from financial and personally identifiable information (PII) to medical records or intellectual property. While employee error is a major source of security breaches, a well-trained employee who knows how to take the proper precautions is a key defense against attacks and breaches. Over the course of their daily responsibilities, employees can mistakenly share that information outside of the secure network. Often, this data loss occurs through email, such as mentioning restricted information in outside correspondence or attaching documents that may violate customer or patient privacy. For example, say an employee is working on a presentation that contains confidential data. They hit a roadblock while trying to fix a formatting issue and, in their race to meet the looming deadline, decide to reach out to a friend for help and send the presentation, confidential data included, via email.


Lawmakers Urge Private Sector to Do More on Cybersecurity

Treating cybersecurity as a core business risk and devoting the appropriate resources to it is now essential, said Tom Kellermann, head of cybersecurity strategy at software firm VMware Inc., who also sits on the Secret Service’s Cyber Investigation Advisory Board. “Cybersecurity should no longer be viewed as an expense, but a function of conducting business,” he said. Christopher Roberti, senior vice president for cyber, intelligence and supply chain security policy at the U.S. Chamber of Commerce, which says it is the world’s largest business association, said companies don’t stand a chance against determined nation-state attacks regardless of cybersecurity investments. Partnerships between the government and the private sector are essential, he said. “Businesses must take necessary steps to ensure their cyber defenses are robust and up to date, and the U.S. government must act decisively against cyber criminals to deter future attacks. Each has a role to play and both need to work closely to do more,” Mr. Roberti said.


AI Centers Of Excellence Accelerate AI Industry Adoption

It is important to note that there are several functional and operational models that enterprises are adopting for their CoEs. The change-management model focuses on emphasizing the prospective innovation that artificial intelligence can provide for business stakeholders in the organization. Central to this model is the education and training of executives and business units. In addition to change management, the Sandbox approach is another central model, in which the CoE acts as the company’s hub of innovation and R&D. This model emphasizes proofs of concept and different emerging technologies. The key is aligning business units around POCs and holding them accountable for the initial launch and development of per-subject use cases. Lastly, the Launchpad model for the CoE leverages and builds upon the capabilities of existing data scientists, engineers, and developers. The CoE deploys top subject-matter experts across departments to conduct hands-on training and education and to scope out early-stage business solutions.

Kubernetes: 5 tips we wish we knew sooner

“One thing that’s better to learn earlier than later with Kubernetes is that automation and audits have an interesting relationship: automation reduces human errors, while audits allow humans to address errors made by automation,” Andrade notes. You don’t want to automate a flawed process. It’s often wise to take a layered approach to container security, including automation. Examples include automating security policies governing the use of container images stored in your private registry, as well as performing automated security testing as part of your build or continuous integration process. Check out a more detailed explanation of this approach in 10 layers of container security. Kubernetes operators are another tool for automating security needs. “The really cool thing is that you can use Kubernetes operators to manage Kubernetes itself – making it easier to deliver and automate secured deployments,” as Red Hat security strategist Kirsten Newcomer explained to us. “For example, operators can manage drift, using the declarative nature of Kubernetes to reset any unsupported configuration changes.”
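
As one hedged illustration of the automate-the-policy, audit-the-result idea (the registry name is hypothetical, and this assumes the kubernetes Python client and kubeconfig access to a cluster; in practice an admission controller or operator would enforce this rather than an after-the-fact script):

```python
# Flag running pods whose container images do not come from an approved
# private registry, as a simple automated image-policy audit.
from kubernetes import client, config

APPROVED_REGISTRY = "registry.internal.example.com/"   # hypothetical registry

def find_unapproved_images() -> list[tuple[str, str, str]]:
    config.load_kube_config()
    violations = []
    for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
        for container in pod.spec.containers:
            if not container.image.startswith(APPROVED_REGISTRY):
                violations.append((pod.metadata.namespace, pod.metadata.name,
                                   container.image))
    return violations

if __name__ == "__main__":
    for namespace, name, image in find_unapproved_images():
        print(f"{namespace}/{name} uses unapproved image {image}")
```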
 


Quote for the day:

"Well, I think that - I think leadership's always been about two main things: imagination and courage." -- Paul Keating

Daily Tech Digest - June 19, 2021

What To Look At When Assessing Your Cybersecurity Vulnerability Risk

Each asset vulnerability is assigned a CVSS score that describes the technical risk. This score is a common, neutral framework based on ease of exploitation and potential impact on confidentiality, integrity and availability. But CVSS scores alone don’t consider the asset’s value to the business. You should also ask three further questions. First, is the asset the most important server in the organization (i.e., where critical data is stored)? Second, is it an application that processes business transactions (e.g., e-commerce)? Business criticality is a factor that must be considered and weighed in deciding whether to prioritize the asset. Third, what is the threat level associated with the vulnerability? This is based on assessing activity by hackers around the vulnerability. Solution providers monitor this activity to provide threat intelligence data feeds to mature security organizations. All three factors need to be put together and correlated into a cyber risk score for the asset. A score like this allows security teams to apply data-driven judgment to their prioritization and remediation of the vulnerabilities found.
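
Purely as an illustration of combining those three factors (the weights and scales below are hypothetical, not a standard formula), such a contextual risk score might be sketched like this:

```python
# Combine CVSS, business criticality and threat level into one asset risk score.
def asset_risk_score(cvss: float, business_criticality: float,
                     threat_level: float) -> float:
    """cvss is on the 0-10 CVSS scale; the other two factors are on 0-1 scales."""
    weights = {"technical": 0.5, "business": 0.3, "threat": 0.2}   # assumed weights
    score = (weights["technical"] * (cvss / 10)
             + weights["business"] * business_criticality
             + weights["threat"] * threat_level)
    return round(score * 10, 1)        # map back onto a 0-10 scale

# The same CVSS 7.5 flaw gets a very different priority once context is added:
print(asset_risk_score(7.5, business_criticality=1.0, threat_level=0.9))  # crown-jewel server
print(asset_risk_score(7.5, business_criticality=0.2, threat_level=0.1))  # low-value test box
```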


Creating a Data Advantage: CIOs Discuss Best Practices

Nick Heudecker, former VP of Data and Analytics at Gartner, claims “CIOs are at a strange crossroads with data today. They are both competing and collaborating with CDOs that may be taking their budget and staff, but they’re still ultimately responsible for data infrastructure.” Some, says Microsoft Security Architect Wayne Anderson, unfortunately believe “the CDO is a yellow flag to the CIO to look at their role in the organization.” Without question, CIOs need to make IT a better partner. To do this, they need to connect with the business. They need to make data transformation happen and become data champions. Former CIO Joanna Young argues that for data to win there needs to be collaboration between technology functions. She says, however, “I have yet to see an organization with CIO, CTO, and CDO where the sum is greater than the parts; as roles and responsibilities get murky, how can they be effective?” With this said, IT typically does not own the corporation’s data. For this reason, Sacolick says, “a key responsibility of the CIO and/or CDO is defining owners and educating them on their responsibilities, especially for data quality.”


Dear enterprise IT: Cybercriminals use AI too

AI-powered cyberattacks are conventional cyberattacks augmented with AI and machine learning technologies. Take phishing, for example — a type of social engineering where an attacker sends a message designed to trick a human into revealing sensitive information or installing malware. Infused with AI, phishing messages can be personalized to target high-profile employees at enterprises (like members of the C-suite) in a practice known as “spear phishing.” Imagine an adversarial group attempting to impersonate board members or send fake invoices claiming to come from familiar suppliers. Sourcing a machine learning language model capable of generating convincing-sounding emails, the group could fine-tune a system to generate replies that adopt the tone and tenor of the impersonated sender and even make references to previous correspondences. That might sound far-fetched — but there’s already growing concern among academics that tools like GPT-3 could be co-opted to foment discord by spreading misinformation, disinformation, and outright lies. Phishing emails need not be highly targeted to present a threat to organizations. 


Cisco Smart Switches Riddled with Severe Security Holes

There are also multiple other security flaws in the same web-management interface. For instance, the bug tracked as CVE-2021-1541 is an arbitrary code-execution vulnerability that would allow an authenticated, remote attacker to execute arbitrary commands as a root user on the underlying operating system. “This vulnerability is due to a lack of parameter validation for TFTP configuration parameters,” according to Cisco. “An attacker could exploit this vulnerability by entering crafted input for specific TFTP configuration parameters. A successful exploit could allow the attacker to execute arbitrary commands as a root user on the underlying operating system.” The attacker must have valid administrative credentials on the device in order to exploit the issue, so the CVSS score comes in at 7.2 rather than critical. The issue tracked as CVE-2021-1543, meanwhile, allows cross-site scripting from an unauthenticated, remote attacker (CVSS score: 6.1). “This vulnerability is due to insufficient validation of user-supplied input by the web-based management interface of the affected device,” according to Cisco. “An attacker could exploit this vulnerability by persuading a user to click a malicious link and access a specific page. ...”


How to Solve Kubernetes Persistent Storage Challenges

By design, containers work best with stateless applications. Kubernetes is able to create and remove containers in a rapid and dynamic manner because the applications within those containers come packaged with all of the dependencies they need to run. Regardless of where a new container is spun up — the same cluster or a different cloud provider — Kubernetes ensures that the application has access to the fundamental resources it needs to operate. The dynamic creation and deletion of containers doesn’t work well for applications that need to persistently store data. As a stateful, containerized application is created or destroyed across a Kubernetes cluster, it must always know where its data is, have a high degree of access to that data and be able to ensure its integrity. This isn’t possible if an application’s stored state is destroyed every time its container is spun down. Developers and platform managers want the best of both worlds: the fast failover, easy deployment and efficiency of containers combined with the persistence of stateful workloads.
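
As a hedged sketch of how a stateful workload requests storage that outlives any single container, the snippet below creates a PersistentVolumeClaim with the kubernetes Python client (this is more commonly written as YAML; it assumes cluster access and a default StorageClass, and the names and sizes are hypothetical):

```python
# Request persistent storage that a pod or StatefulSet can mount as a volume,
# so the application's state survives container restarts and rescheduling.
from kubernetes import client, config

def create_claim(namespace: str = "default", name: str = "app-data") -> None:
    config.load_kube_config()
    claim = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name=name),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace, claim)

if __name__ == "__main__":
    create_claim()
```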


Mathematicians Prove 2D Version of Quantum Gravity Really Works

“It took us 40 years in math to make sense of four pages,” said Vincent Vargas, a mathematician at the French National Center for Scientific Research and co-author of the research with Rémi Rhodes of Aix-Marseille University, Antti Kupiainen of the University of Helsinki, François David of the French National Center for Scientific Research, and Colin Guillarmou of Paris-Saclay University. ... “Quantum gravity basically means random geometry, because quantum means random and gravity means geometry,” said Sun. Polyakov’s first step in exploring the world of random surfaces was to write down an expression defining the odds of finding a particular spiky planet, much as the bell curve defines the odds of meeting someone of a particular height. But his formula did not lead to useful numerical predictions. To solve a quantum field theory is to be able to use the field to predict observations. In practice, this means calculating a field’s “correlation functions,” which capture the field’s behavior by describing the extent to which a measurement of the field at one point relates, or correlates, to a measurement at another point. Calculating correlation functions in the photon field, for instance, can give you the textbook laws of quantum electromagnetism.
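
For readers unfamiliar with the term, a correlation function can be written schematically as a path-integral average of the field measured at two points; the expression below is the generic textbook form for a Euclidean field theory, not the specific Liouville-theory formula the authors proved:

```latex
% Generic two-point correlation function of a field \phi with action S[\phi]:
\langle \phi(x)\,\phi(y) \rangle
  = \frac{\int \mathcal{D}\phi \;\, \phi(x)\,\phi(y)\, e^{-S[\phi]}}
         {\int \mathcal{D}\phi \;\, e^{-S[\phi]}}
```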


DevSecOps: An open source story

DevSecOps adds security into DevOps, further refining the concept to address code quality, security, and reliability assurance through automation, enabling continuous security and compliance. Organizations seeking to comply with Sarbanes-Oxley (SOX), the Payment Card Industry Data Security Standard (PCI DSS), FedRAMP, and similar programs are candidates for implementing DevSecOps. ... The more you move security mitigation to the left to tackle these issues in development, the more money you save. You also avoid potential negative headlines because your teams don't have to respond to issues in production, where remediation costs can soar way higher than if you had caught them in your development environment. You can treat the move from DevOps to DevSecOps as another step in the DevOps journey. ... Even if you treat DevSecOps as another step in your DevOps journey, you can expect changes to your toolchain, to roles on your DevOps and security teams, and to how your groups interact. Over 60% of the respondents to GitLab's 2021 Global DevSecOps Survey report new roles and responsibilities because of DevOps, so prepare your people upfront and keep surprises to a minimum.


Thought Leadership Strategy: A Key ingredient for Professionals Who Want More Success

We’ve discussed personal branding extensively on this blog. And you can think of thought leadership as an extension of your personal brand — it can even become a goal in your branding and content marketing strategy. But what is it? Thought leadership taps into your industry expertise, your passion, and your talent, all to connect you with your audience. A thought leader produces valuable content that not only addresses the pain points of his/her audience but also delivers key insights and advice on industry trends that aren’t available elsewhere. One of the most advantageous benefits of thought leadership is that it is accessible to anyone. Your background or degrees don’t matter. If you have knowledge and expertise, you can become a thought leader! But there’s one caveat. While you can become a thought leader, it is not something that you can force. You can’t just label yourself a “thought leader” and expect everyone to get the memo. This title has to be earned through great content, proven ability, and visibility.


What Is Blockchain?

How these new blocks are created is key to why blockchain is considered highly secure. A majority of nodes must verify and confirm the legitimacy of new data before a new block can be added to the ledger. For a cryptocurrency, this might involve ensuring that new transactions in a block were not fraudulent, or that coins had not been spent more than once. This is different from a standalone database or spreadsheet, where one person can make changes without oversight. “Once there is consensus, the block is added to the chain and the underlying transactions are recorded in the distributed ledger,” says C. Neil Gray, partner in the fintech practice areas at Duane Morris LLP. “Blocks are securely linked together, forming a secure digital chain from the beginning of the ledger to the present.” ... “As a reward for their efforts in validating changes to the shared data, nodes are typically rewarded with new amounts of the blockchain’s native currency—e.g., new bitcoin on the bitcoin blockchain,” says Sarah Shtylman, fintech and blockchain counsel with Perkins Coie.
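
A minimal Python sketch of the hash-linking Gray describes (it deliberately omits consensus, networking and mining) shows why tampering with an earlier block invalidates everything after it:

```python
# Each block commits to the previous block's hash, so altering any earlier
# record breaks every later link in the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list[dict], transactions: list[str]) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev,
                  "transactions": transactions})

def chain_is_valid(chain: list[dict]) -> bool:
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list[dict] = []
add_block(ledger, ["alice pays bob 5"])
add_block(ledger, ["bob pays carol 2"])
print(chain_is_valid(ledger))                         # True
ledger[0]["transactions"] = ["alice pays bob 500"]    # tamper with history
print(chain_is_valid(ledger))                         # False: later links no longer match
```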


How Workplace Culture Affects Workplace Performance

If you have ever visited a software shop where they turn out great products, you will have noticed the enthusiasm of the people for their work. You will also have noticed that these skilled people are given a working atmosphere that is conducive to their work. They feel safe to argue if they think they have a better answer; they are valued by their organisation; and they are given an appropriate amount of time to do their tasks. This is workplace culture. For the last two years, we, along with our partners in The Atlantic Systems Guild, have been writing a book on workplace culture. It is not the intention of this article to sell you a book (of course we would be delighted if you buy it) but to demonstrate some findings from our work. To have a useful view of culture, we identified six drivers of workplace culture. It is worth having a quick look at these drivers, because even a superficial glance will show you areas that you might work on to improve your own workplace culture, and thereby the performance of the workplace.



Quote for the day:

"To do great things is difficult; but to command great things is more difficult." -- Friedrich Nietzsche

Daily Tech Digest - June 18, 2021

Does Cloud Computing Help or Harm the Environment?

Fortunately, getting data centers to rely on clean, renewable energy sources and use that energy more efficiently are far easier tasks than reducing the carbon footprint of the billions of digital storage devices that they've replaced. Here is where economic and environmental interests may overlap. Data center companies have every incentive to maximize the efficiency of their resources and reduce their costs. For that reason alone, the world's biggest data center companies—Amazon, Microsoft, and Google—have all begun implementing plans for their data centers to run on 100% carbon-free electricity. Amazon claims to be the world's largest renewable energy purchaser, consistent with its goals of powering its operations with 100% renewables by 2025 and becoming carbon net-zero by 2040. Microsoft has pledged to be carbon negative by 2030 and to remove from the atmosphere all the carbon the company has ever emitted since it was founded in 1975. To achieve this, it plans on having all of its data centers running on 100% renewable energy by 2025. And Google had already reached its 100% renewable energy target in 2018, though it did so in part by purchasing offsets to match those parts of its operations that still relied on fossil fuel electricity.


5 Keys to Creating a Zero-Trust Security Foundation

Recent high-profile attacks have disrupted global commerce across the world, bringing home the critical importance of maintaining a robust IT security program. The recent ransomware attacks on the Colonial Pipeline, the largest petroleum pipeline in the US, and meat supplier JBS, highlight the cascading, society-disrupting havoc these types of attacks can create. Those concerns increasingly extend to IoT devices, as evidenced by the recent hack of cloud-based security services firm Verkada, where bad actors gained access to 150,000 of the company’s cameras, including inside factories, hospitals, prisons, schools, and even police stations. Vulnerabilities come in many forms and we have known for a long time that the onslaught of IoT devices onto corporate networks is largely unprotected. It’s little wonder then that when the Ponemon Institute surveyed 4,000 security professionals and asked why breaches still happen, the top answer was the increasing attack surface. ... As a networking vendor, connecting people and things is part of Aruba’s core mission. 


4 ways AI can help us enter a new age of cybersecurity

Combine conventional threat intelligence (a list of all known cyberthreats to date) and use machine learning to understand risks. This should result in a better, more efficient system of threat detection and prevention, and it can also help to identify any loopholes or threats present in the data. In fact, machine learning can also be used to spot abnormalities or potential vulnerabilities in the midst of “normal” activity and warn users of a threat before it can compromise essential data. With the right systems in place, hackers won't even realize that you know of their presence, so you can take immediate measures to ensure the safety of your digital infrastructure. ... In recent years, cryptocurrencies like Bitcoin and Ethereum have been rising in popularity. These cryptocurrencies are built upon blockchain, an innovative technical solution for storing a secure, decentralized record of transactions. Blockchain can be used to protect medical records and to help in security management by identifying criminal identity loopholes in the system. With blockchain technology, verification keys wouldn't be required anymore: if someone tries to hack the data, the system analyzes the whole mass of data chains.


Moving From Digital Banking To Embedded Experiences

First and most importantly, banks and credit unions must focus on placing the consumer at the center of the organization, with product silos eliminated in favor of teams aligned around the customer journey. According to the research, 64% of the banking sector’s digital masters have “created personae and journey maps to identify and serve customers better.” Beyond that, it will be imperative to create an agility and flexibility in delivery similar to what exists in fintech and bigtech firms. This will most likely require changes in the composition of boards, top leadership and departmental management who can see banking from a new perspective. New operating models will also be required that will include the collaboration with third-party providers. There also needs to be support of open banking APIs that will enable the offering of new products both within and outside financial services. Bottom line, the infrastructure of banking as well as the perspective of banking’s role in the consumer’s life must change. According to Capgemini, 64% of banks are actively working with a wide ecosystem of partners – such as startups, incubators, technology firms, and even competitors – to co-develop solutions. 


Cybersecurity: Five key reminders for compliance teams

Cybersecurity breaches are not always the work of nefarious actors orchestrating a sophisticated hack. Damaging data breaches may be just as likely to result from unintentional human error. Even seemingly benign behaviors –– using public Wi-Fi, neglecting to put passwords on computers and mobile devices, and clicking on bad links –– can be all it takes to give cybercriminals the access they need. It does little good to build a digital fortress if there aren’t adequate controls over who gains access, and under what circumstances. ... First, establishing clear SoD helps avoid conflicts that could lead to fraud or other abuse. For large organizations with multiple lines of business, this is particularly important. Investment professionals on a firm’s buy-side, for example, should not have access to the exact same data as those on the sell-side. SoD may also help prevent control failures that can occur when too many people have access to data for which they aren’t necessarily accountable. By segregating duties (and data access), compliance teams are better positioned to spot weaknesses, while also ensuring that teams and individuals understand exactly what data should be in their purview and what may be off-limits.


The four Fs of employee experience

Ask yourself what it would take for employee experience to be a delight — for example, through gamified training modules or KPIs. We work with a leading technology firm that asked itself this very question and developed its tools for surveying employees accordingly, designing them to be simple and intuitive, satisfying, and not frustrating. The firm used layman’s terms and an appealing tone of voice in written content such as instructions, explanations, and requests. It avoided jargon. And it invested in interesting, stimulating visual interactions rather than ones that were bland and text-heavy — the new experience was less like a spreadsheet assignment to be endured and more of an opportunity to engage. ... Don’t neglect the foundations. Ultimately, employees have a right to expect that “it just works,” whether “it” is their human resources self-service portal, their expense management system, or their system interoperability. It’s also critical that user experience be accessible to all, including employees with any type of disability. 


Amex bets on AI and NLP for customer service

We started this journey [of leveraging AI] long before we applied machine learning to some other more mature use cases, including our fraud models and some credit risk models. And in recent years, especially in the past five years or so, we started to see with certainty that deep neural network models started to outperform almost every other machine learning model when it comes to high-dimensional data and highly unstructured data. We deal not only with some of the traditional fields, like customer transactions, but also with tax consequences and volume history data. Neural network models can effectively deal with all of that. ... First, I think it’s really about recognizing patterns. And if you look at certain use cases where you have customer behavior that’s being repeated and you can expedite that behavior, then that tends to be a real sweet spot for machine learning capabilities. The other thing I would add is that we take the decision to apply machine learning techniques quite seriously. We have an entire AI governance board that cross-checks all the models we build for bias and privacy concerns. So even when taking an AI approach, we have to justify to a number of internal teams why it makes sense.


‘Debt’ as a Guide on the Agile Journey: Technical Debt

If network infrastructure is not your specialty, you might wonder: how much could requirements for connectivity really change over 10 years? Does the Network Team really need to develop a completely new solution and live the DevOps dream? The answer to that is a resounding yes! Today’s (not to mention tomorrow’s) requirements for security features and performance are significantly different from 10 years ago; the network infrastructure is key to cyber security, protecting vital business processes and applications by controlling data traffic, and the network must support the vastly increasing amount of data traffic generated by new streaming and IoT services, for instance. The Network Team was not able to deliver on these expectations with the legacy technology that we were fighting to operate and maintain, and thus, the business was impacted. Internally, the Network Team themselves were also impacted. They felt the heat from several CXOs who were frustrated that they couldn't satisfactorily support top priorities such as the cyber security agenda.


Deep reinforcement learning will transform manufacturing as we know it

For many large systems, the only practical way to find the best action path is with simulation. In those situations, you must create a digital model of the physical system you want to understand in order to generate the data reinforcement learning needs. These models are variously called digital twins, simulations and reinforcement-learning environments; in manufacturing and supply chain applications they all essentially mean the same thing. Recreating any physical system requires domain experts who understand how the system works. This can be a problem even for systems as small as a single fulfillment center, for the simple reason that the people who built those systems may have left or died, and their successors have learned how to operate them but not how to reconstruct them. Many simulation software tools offer low-code interfaces that enable domain experts to create digital models of those physical systems. This is important, because domain expertise and software engineering skills often cannot be found in the same person.
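
As a rough illustration of what such a reinforcement-learning environment looks like in code, here is a toy "digital twin" of a single packing station; the dynamics, reward terms, and parameter names are invented for the sketch, and the interface simply follows the common reset()/step() convention rather than any specific simulation tool.

```python
import random

class ConveyorTwinEnv:
    """Toy 'digital twin' of a single packing station (illustrative only).

    Follows the common reset()/step(action) environment interface used by
    reinforcement-learning libraries: each step returns an observation,
    a reward, and a done flag.
    """

    def __init__(self, max_queue: int = 20, horizon: int = 100):
        self.max_queue = max_queue
        self.horizon = horizon

    def reset(self):
        self.queue = 0          # items waiting at the station
        self.t = 0
        return self.queue

    def step(self, packing_speed: int):
        # New items arrive at random; the agent chooses a packing speed (0-3).
        arrivals = random.randint(0, 3)
        packed = min(self.queue + arrivals, packing_speed)
        self.queue = min(self.queue + arrivals - packed, self.max_queue)
        self.t += 1
        # Reward throughput, penalize backlog and energy (higher speed).
        reward = packed - 0.1 * self.queue - 0.05 * packing_speed
        done = self.t >= self.horizon
        return self.queue, reward, done

# A random-policy rollout: the kind of simulated data an RL agent learns from.
env = ConveyorTwinEnv()
obs, total, done = env.reset(), 0.0, False
while not done:
    obs, r, done = env.step(random.choice([0, 1, 2, 3]))
    total += r
print(f"episode return: {total:.1f}")
```

A real digital twin would encode the domain experts' knowledge of arrival rates, machine behavior and constraints, but the contract with the learning algorithm stays this small.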


Multicluster Management with Kubernetes and Istio

Do you have multiple Kubernetes clusters and a service mesh? Do your virtual machines and services in a Kubernetes cluster need to interact? This article will take you through the process and considerations of building a hybrid cloud using Kubernetes and an Istio Service Mesh. Together, Kubernetes and Istio can be used to bring hybrid workloads into a mesh and achieve interoperability across multiple clusters. But another layer of infrastructure — a management plane — is helpful for managing multicluster or multimesh deployments. ... Using Kubernetes enables rapid deployment of a distributed environment that supports cloud interoperability and unifies the control plane on the cloud. It also provides resource objects, such as Service, Ingress and Gateway, to handle application traffic. The Kubernetes API server communicates with the kube-proxy component on each node in the cluster, which creates iptables rules for the node and forwards requests to other pods. Assuming that a client now wants to access a service in Kubernetes, the request is first sent to the Ingress/Gateway and then forwarded to the backend service.
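
As a small illustration of working across clusters from code, the sketch below uses the official Kubernetes Python client to list Services in two clusters selected by kubeconfig context; the context names are hypothetical, and a real Istio multicluster setup would layer a management plane and east-west gateways on top of this kind of per-cluster access.

```python
# Minimal sketch: inspecting Services across two clusters with the official
# Kubernetes Python client. The kubeconfig context names ("cluster-east",
# "cluster-west") are hypothetical placeholders.
from kubernetes import client, config

for context in ("cluster-east", "cluster-west"):
    # Build a separate API client per cluster from the local kubeconfig.
    api_client = config.new_client_from_config(context=context)
    core = client.CoreV1Api(api_client)

    print(f"--- services in {context} ---")
    for svc in core.list_service_for_all_namespaces().items:
        print(f"{svc.metadata.namespace}/{svc.metadata.name} -> {svc.spec.cluster_ip}")
```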



Quote for the day:

"A good leader can't get too far ahead of his followers." -- Franklin D. Roosevelt

Daily Tech Digest - June 17, 2021

A Deep Dive Into Efinity: Next-Generation Blockchain for NFTs

Efinity will be a hub for all fungible and non-fungible tokens, meant to serve and benefit all participants in the digital asset space—collectors, creators, artists, decentralized app (dApp) developers, enterprises, sports teams, and more. The Enjin ecosystem is robust, with a wide range of projects and developers using our products to create, distribute, and integrate NFTs with their projects. Over 1.14 billion digital assets have already been created with Enjin. All of these tokens can benefit from the cost efficiency, speed, and next-generation features of Efinity—and that’s only the existing Enjin ecosystem. We believe Efinity will do for the wider NFT ecosystem what ERC-1155 did for Ethereum: make NFTs even more accessible to everyone. We expect end-users to create NFTs as easily and intuitively as they take a picture with a smartphone today; trade NFTs faster than they can purchase something from Amazon; and, most importantly, use those tokens in a myriad of futuristic ways. It’s up to companies and developers across the world to give that next-gen utility to NFTs and truly unlock their power for the masses.


A Look at a Zero Trust Strategy for the Remote Workforce

If you are new to the security world, it is fair to ask yourself, “Isn’t access to data and systems always conditional? Isn’t it always granted to someone who has access to the credentials (ID and password)?” True enough, but in totality, the approach to managing access encompasses a broader spectrum of privacy policies. These policies include a mix of different strategies that can be applied based on an organization’s security vulnerabilities. Conditional access is one such security management practice that many companies have opted for. The shift to smart mobile devices and the cloud has made it necessary to ensure conditional access. Further, this has become imperative, as remote working is here to stay. With several companies making announcements about permanent work-from-home policies, a zero-trust model of conditional access has become crucial. IT security teams must be prepared to both validate and verify devices and users with a set of automated policies. IT teams could easily monitor incoming IP addresses as the first step for identifying credentials. However, the growing use of VPNs coupled with remote working environments is making that impossible, thus rendering organizations more vulnerable to threats.
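
A minimal sketch of what such an automated conditional-access policy could look like is shown below; the signals, field names, and thresholds are invented for illustration rather than taken from any particular zero-trust product.

```python
# Hypothetical sketch of an automated conditional-access decision: every
# request is evaluated against device, identity, and network-risk signals
# instead of trusting a source IP. Fields and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_mfa_passed: bool
    device_managed: bool
    device_patched: bool
    network_risk_score: float   # 0.0 (low) to 1.0 (high), e.g. from a risk engine

def evaluate(request: AccessRequest) -> str:
    if not request.user_mfa_passed:
        return "deny"
    if not (request.device_managed and request.device_patched):
        return "deny"
    if request.network_risk_score > 0.7:
        return "step-up-auth"   # require additional verification
    return "allow"

print(evaluate(AccessRequest(True, True, True, 0.2)))   # allow
print(evaluate(AccessRequest(True, True, False, 0.2)))  # deny
```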


Most firms face second ransomware attack after paying off first

The majority of businesses that choose to pay to regain access to their encrypted systems experience a subsequent ransomware attack, and almost half of those that pay up say some or all of the data they retrieved was corrupted. Some 80% of organisations that paid ransom demands experienced a second attack, and 46% of those believed the subsequent attack was carried out by the same hackers. Amongst those that paid to regain access to their systems, 46% said at least some of their data was corrupted, according to a Cybereason survey released Wednesday. Conducted by Censuswide, the study polled 1,263 security professionals in seven markets worldwide, including 100 in Singapore, as well as respondents in Germany, France, the US, and UK. Globally, 51% retrieved their encrypted systems without any data loss, while 3% said they did not regain access to any encrypted data. The report revealed that one organisation reportedly paid a ransom in the millions of dollars, only to be targeted a second time by the same attackers within a fortnight.


Top 10 Security Risks in Web Applications

Injection or SQL injection is a type of security attack in which a malicious attacker inserts or injects a query via input data sent from the client side to the server (something as simple as filling in a form on the website). If it is successful, the attacker can read data from the database, add new data, update data, delete data present in the database, issue administrator commands to carry out privileged database tasks, or even issue commands to the operating system in some cases. ... Broken authentication is a case where the authentication system of the web application is compromised, and it can result in a series of security threats. This is possible when the application permits users to choose weak passwords that are either dictionary words or common passwords like “12345678” or “password”, allowing an adversary to carry out a brute-force attack and disguise itself as a legitimate user. This is so common because, shockingly, 59% of people reuse the same password on every website they use. Moreover, 90% of passwords can be cracked in close to 6 hours! Therefore, it is important to require users to set strong passwords with a combination of alphanumeric and special characters. Broken authentication is also possible due to credential stuffing, URL rewriting, or failure to rotate session IDs.
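
The sketch below illustrates the injection mechanism using sqlite3 as a stand-in database: the same attacker-supplied string dumps every row when concatenated into the SQL text, but returns nothing when passed as a bound parameter. The table and payload are illustrative only.

```python
# Minimal SQL injection sketch with sqlite3 as a stand-in database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "' OR '1'='1"  # a classic injection payload from a form field

# Vulnerable: the payload is concatenated into the SQL text, so the WHERE
# clause always evaluates to true and every row is returned.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safer: a parameterized query treats the payload as a plain value.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()

print("unsafe query returned:", unsafe)  # leaks all rows
print("safe query returned:", safe)      # returns nothing
```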


A Google AI Designed a Computer Chip as Well as a Human Engineer—But Much Faster

Human designers thought “there was no way that this is going to be high quality. They almost didn’t want to evaluate them,” said Goldie. But the team pushed the project from theory to practice. In January, Google integrated some AI-designed elements into its next-generation AI processors. While specifics are being kept under wraps, the solutions were intriguing enough for millions of copies to be physically manufactured. The team plans to release its code for the broader community to further optimize—and understand—the machine’s brain for chip design. What seems like magic today could provide insights into even better floorplan designs, extending the gradually slowing (or dying) Moore’s Law to further bolster our computational hardware. Even tiny improvements in speed or power consumption in computing could make a massive difference. “We can…expect the semiconductor industry to redouble its interest in replicating the authors’ work, and to pursue a host of similar applications throughout the chip-design process,” said Kahng.

Jensen Huang On Metaverse, Proof Of Stake And Ethereum

For a long time now, proof of stake has been baffling people interested in crypto and its application in various platforms like Twitter and Project Bluesky. Jensen’s views on the matter have also been favourable to the concept that might replace proof of work in blockchain shortly. He said that the demand for Ethereum had reached such a level that it would be nice to have another method of confirming transactions. “Ethereum has established itself. It now has an opportunity to implement a second generation that carries on that platform approach and all of the services that are built on top of it,” he added. Jensen also explained that the reason behind the development of Nvidia’s CMP was the expectation that a lot of Ethereum coins would be mined. CMP has enough functionality that it can be used for crypto mining. ... Addressing the question of how long the chip shortage will last, Jensen said that demand has been growing consistently, and Nvidia particularly has had pent-up demand since it reset and reinvented computer graphics, a driving factor in skyrocketing demand. 


Prioritizing and Microservices

Microservices frequently need to communicate with one another in order to accomplish their tasks. One obvious way for them to do so is via direct, synchronous calls using HTTP or gRPC. However, using such calls introduces dependencies between the two services involved, and reduces the availability of the calling service (because when the destination service is unavailable, the calling service typically becomes unavailable as well). This relationship is described by the CAP theorem (and PACELC), which I've described previously. ... If any response is necessary, the processing service publishes an event, which the initiating service can subscribe to and consume. ... The issue with this approach is that the prioritization is only applied at the entrance to the system, and is not enforced within it. This is exacerbated by the fact that the report orchestrator has no FIFO expectation and in fact can begin work on an arbitrary number of commands at the same time, potentially resulting in a very large amount of work in process (WIP). We can use Little's Law to understand how WIP affects the time it takes for requests to move through a system, which in turn can affect high-priority SLAs. Constraining total WIP on the system, or at least on the orchestrator, would mitigate the issue.
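
A quick back-of-the-envelope application of Little's Law (average time in system W = L / λ, where L is the average WIP and λ is throughput) shows why unconstrained WIP in the orchestrator hurts high-priority requests; the numbers below are illustrative only.

```python
# Little's Law sketch: W = L / lambda. Numbers are illustrative only.
throughput_per_min = 10          # lambda: report commands completed per minute

for wip in (20, 200):            # L: commands in flight inside the orchestrator
    wait_minutes = wip / throughput_per_min
    print(f"WIP={wip:4d} -> average time in system ~ {wait_minutes:.1f} min")

# With the same throughput, letting WIP grow 10x makes every request,
# including high-priority ones, wait roughly 10x longer -- which is why
# capping WIP (e.g., with a bounded worker pool or semaphore) protects SLAs.
```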


Cloud Outage Fallout: Should You Brace for Future Disruption?

The outage also put other topics in focus that might not have received consistent attention in the past. Though DevOps is frequently talked about in enterprise development circles, Bates questions to what degree it is being implemented. “If we can truly get to a DevOps world, securing development and operations, it’s going to help a lot,” he says. “We talk very glibly about DevOps, but we don’t ask the really hard questions about if anyone is really doing this.” Taken in the context of the sudden moves to the cloud in response to the pandemic, the Fastly outage was a relatively quick blip, says Drew Firment, senior vice president of transformation with cloud training platform A Cloud Guru. The incident does offer a moment for reflection for organizations. “Folks are looking at their cloud architecture,” he says. “Architecture equals operations.” As organizations build in the cloud, decisions on cloud providers and services can have a dramatic effect on resiliency, Firment says. “That’s why cloud architects are in such demand, especially if they can take those things into consideration.”


Proactive and reactive: symbiotic sides of the same AI coin

Artificial Intelligence (AI) as a phrase is bandied about to refer to any number of technologies currently in use. And it’s not that this is wrong per se, but it’s like referring to rustic Italian cuisine and molecular gastronomy simply as “food”. The world would be a poorer place without either, but they serve entirely separate purposes for the palate. According to Gartner, “By 2025, proactive (outbound) customer engagement interactions will outnumber reactive (inbound) customer engagement interactions.” The distinction being made here is between AI as it is designed for use in the reactive realm (think chatbots) and its use for proactive engagement. While the core technology that underlies both may be similar, and both have specific use cases, proactive engagement is a more focused utilisation. If you have ever attempted to play the game ‘Twenty Questions’, you have had an inkling of what a chatbot is attempting to do, i.e., asking a series of questions of an individual in an effort to get at an answer. Except in the case of chatbots, you are usually playing the game with an irate customer in a negative frame of mind. 


Are your cryptographic keys truly safe? Root of Trust redefined for the cloud era

When you are working with cloud infrastructure, the hardware (and in many cases also the software) is not under your control. This is also true of cloud-based HSMs provided by cloud service providers (CSPs). You need to look no further than the CLOUD Act to realize that your CSPs have immediate access to your keys and data. This is not theoretical access – this report published by Amazon details the law enforcement data requests with which Amazon complied over a six-month period in 2020. It’s not a big jump to imagine an insider at your CSP exploiting this ability to expose your keys. While CSPs make genuine efforts to secure their hardware under the Shared Responsibility Model, the nature of the beast is that using third-party infrastructure also leaves you vulnerable to supply chain attacks. Consider the attack on SolarWinds and imagine the repercussions of your CSP – and by extension you – falling victim to such a large-scale supply chain attack. It’s clear that the implementation of Root of Trust as a purely hardware solution deployed in a single location needs to move with the times.



Quote for the day:

"No person can be a great leader unless he takes genuine joy in the successes of those under him. -- W. A. Nance