Daily Tech Digest - November 04, 2022

Can today’s videoconferencing tech evolve into tomorrow’s metaverse?

The market continues to reject headset options for virtual reality (VR), with the largest recent failure being 3D TV, whose glasses were relatively light and inexpensive compared to other augmented reality (AR) and VR headsets. There are two ways to approach this, and they aren’t mutually exclusive. One is to eliminate the headset and use a different technology such as “hard light” or LED walls. Another, more likely near-term path is to create headsets with far broader applicability than current models. This means making them more attractive to wear and giving them a compelling secondary use, such as watching video entertainment or improving privacy, security, and safety. If I want to use a headset because it does something I want, while also being useful for videoconferencing, I’m more likely to try it for collaboration. Right now, despite the hype, the metaverse isn’t real enough to be compelling. And headsets are tied tightly to VR experiences that aren’t going to drive their use en masse. This leaves cost, appearance, and utility out of balance.


Fill the cybersecurity talent gap with inquisitive job candidates

Curiosity is also critical when entering the cybersecurity field. Especially for those coming from an atypical background, curiosity can lead to the discovery of solutions that may have otherwise been overlooked. By stepping into hackers’ shoes, curious professionals can figure out how attackers think and behave, and use that insight to shape proactive defense strategies. Curious minds can further lead to the discovery of additional interests within the many facets of the field, making those individuals more well-rounded cybersecurity professionals. ... Another important quality hiring teams can look for in potential cybersecurity candidates is a strong willingness to learn. This encompasses both tenacity and curiosity: Those who are determined and interested in discovering new information are consistently willing and ready to face new challenges. Cybersecurity can be complex and multifaceted, and those who can be patient and take the time to learn the breadth and depth of the field can be successful in unique ways.


Looking for a remote work job? It's getting harder to find one

"In many ways, employees still hold the power to demand more from their employers when it comes to salary, flexibility and benefits. But this power balance is likely to start levelling out in the coming months," she said. Employees and jobseekers are also bracing for an economic slump, with LinkedIn finding that candidates' confidence in their ability to improve their financial situation has "decreased or remains low" compared with August 2022. Guy Berger, principal economist at LinkedIn, said that while employers could not eliminate uncertainty in the year ahead, they could at least "mitigate it" for employees by putting more effort into supporting employee morale. "Consider relatively low-cost, high-value benefits that you might have overlooked before," said Berger. "Don't underestimate the calm that can follow when you reassure employees that you hear them, and that times aren't tough forever." Indeed, salary isn't the only thing employees care about in their careers: work-life balance, flexible-working arrangements and upskilling all rank highly, LinkedIn found.


Solving the Culture Conundrum in Software Engineering

The role of the software engineer has changed; it is no longer about writing code in isolation without much regard for or knowledge of how it benefits the business. Developers work better when they have clarity about the direct impact their work will have on achieving business goals and on the bottom line. It’s down to business leaders to communicate these challenges and goals (in other words, the “why”) to help software developers understand what they’re trying to achieve. But doing so in a way that moves towards a better working culture requires a new approach to building and managing software development teams. The first step is casting aside the negative stereotypes many have of software engineers and celebrating the intellectual and cultural diversity within their teams. Diversity of personnel brings a diversity of personalities, which is crucial to creating more inclusive cultures that accept and welcome all characters with open arms. While this may seem obvious, what is often overlooked is the impact diversity can have on stimulating and increasing innovation.


8 bad communication habits to break in IT

“IT leaders are incredibly talented and well-versed in what they do. They know the problem and solution well, and they’re often eager to point out the features and functionality when fielding questions from the end user. However, when addressing questions from the business, IT leaders need to take a step back to ensure they’re answering the correct question for the right audience. Failing to do so can lead to confusion and credibility gaps. When you mix technology and business professionals, the way you answer questions may need to shift. Make sure you understand in detail what’s being asked and determine how to answer the question in a way that will make the most sense to them. The more IT leaders listen carefully and ask clarifying questions when needed, the better they’ll become at communicating.” ... “As a profession, technologists don’t have a strong reputation as great listeners. We have a bad habit of hearing and immediately responding, which makes it seem like we’re not listening. We teach IT professionals the H.E.A.R. model: hear, empathize, analyze, respond. This is especially important when we need to have a difficult conversation, like addressing an idea that isn’t practical ...”


Startups Scratch the Surface of AGI Without Really Understanding It

Researcher and author Gary Marcus has often pointed out how contemporary AI’s dependence on deep learning is flawed due to this gap. While machines can now recognise patterns in data, their understanding of the data is largely superficial rather than conceptual, making the results difficult to explain. Marcus has said that this has created a vicious cycle in which companies are trapped into chasing benchmarks instead of the foundational ideas of intelligence. This search for clarity pushed a lot of interest into interpretability, and the money followed later. Until a couple of years ago, explainable AI had its time in the spotlight. There was a wave of core AI startups like Kyndi, Fiddler Labs and DataRobot that built explainable AI into their products. Explainable AI started gaining traction among VCs, with firms like UL Ventures, Intel Capital, Lightspeed and Greylock actively investing in it. A report by Gartner stated that “30% of government and large enterprise contracts will require XAI solutions by 2025”.


Is an Outsourced DPO Function the Answer?

Some DPO duties lend themselves to being carried out by a third party outside the business, such as the volume tasks mentioned above, but for others it will be more appropriate to carry them out in-house. For example, effective data mapping requires an intricate knowledge of the company’s day-to-day business processes that may be difficult to communicate to a third-party provider. Likewise, an internal DPO may find it easier to monitor the company’s ongoing data protection compliance, given their involvement in the organisation’s operations. Businesses may therefore wish to consider a hybrid approach, whereby some DPO functions are contracted to an external provider while certain duties are fulfilled within the organisation. Which processes are outsourced and which processes remain internal will depend on the specific processing activities carried out and where internal capabilities and strengths lie. Experts could also be engaged to work with a business to create an internal privacy framework which is then applied uniformly both internally by staff, and externally by an outsourced DPO function.


How Apiiro leverages application security for the software supply chain

Because cybercriminals look to exploit any vulnerabilities they can find in an organization’s application stacks, both security teams and developers need to be extremely proactive at pinpointing and remediating vulnerable applications and code throughout the software supply chain. Apiiro aims to do this by enabling developers to discover every API, service and artifact to create a software bill of materials (SBOM), as well as to identify exposed secrets, API and OSS vulnerabilities, and misconfigurations that increase risk. “The unrelenting demand for next-generation application security solutions has allowed us to deploy our product at scale with leading Fortune 500 customers,” said Idan Plotnik, cofounder and CEO of Apiiro. “Early innovation enables us to grow faster and more efficiently than the competition, and we are building the company for hyper-growth. The combination of our team, business momentum, and support from top-tier investors positions Apiiro to continue to lead a growing industry.”


Key Basic Principles to Secure Kubernetes’ Future

While Kubernetes is designed to be secure, only responding to requests that it can authenticate and authorize, it also gives developers bespoke configuration options, meaning it is only as secure as the role-based access control (RBAC) policies that developers configure. Kubernetes also uses what’s known as a “flat network” that enables groups of containers (or pods) to communicate with other containers by default. This raises security concerns as, in theory, attackers who compromise a pod can access other resources in the same cluster. Despite this complexity, the solution to mitigate this risk is fairly straightforward: a zero trust strategy. With such a large attack surface, a fairly open network design, and workloads sitting across different environments, a zero trust architecture, one that never trusts and always verifies, is crucial when building with Kubernetes. ... All internal requests are considered suspicious, and authentication is required from top to bottom. This strategy helps mitigate risk by assuming threats exist on the network at all times, and so strict security procedures are constantly maintained around every user, device and connection.
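The "never trust, always verify" idea above can be illustrated with a minimal sketch. This is not a real Kubernetes mechanism (production clusters would use mTLS, service-account tokens, and RBAC/NetworkPolicy objects); the shared secret, service names, and allow-list here are purely illustrative. The point is that every internal request is authenticated and then checked against an explicit allow-list, with deny as the default:

```python
import hashlib
import hmac

SHARED_SECRET = b"demo-secret"  # hypothetical; real clusters use mTLS or service-account tokens

def sign(service_name: str) -> str:
    """Issue an HMAC 'identity token' for a calling service."""
    return hmac.new(SHARED_SECRET, service_name.encode(), hashlib.sha256).hexdigest()

def authorize(service_name: str, token: str, allowed: set) -> bool:
    """Zero-trust check: verify the caller's identity on EVERY request,
    then consult an explicit allow-list (deny by default)."""
    if not hmac.compare_digest(sign(service_name), token):
        return False  # unauthenticated: never trusted, even inside the cluster
    return service_name in allowed

allowed_callers = {"checkout"}  # only the 'checkout' service may call this workload
print(authorize("checkout", sign("checkout"), allowed_callers))    # authenticated and allowed
print(authorize("inventory", sign("inventory"), allowed_callers))  # authenticated but not allowed
print(authorize("checkout", "forged-token", allowed_callers))      # fails authentication
```

Note that the second call fails even though the caller proved its identity: in a zero-trust design, authentication alone is never sufficient, which is what mitigates the flat-network risk of one compromised pod reaching everything else.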


Stemming the Security Challenges Posed by SaaS Sprawl

Corey O’Connor, director of products at DoControl, a provider of automated SaaS security, notes that remote and hybrid working models made a significant impact on both SaaS utilization and sprawl. “When they started to gain traction, CIOs responded by allowing the business to use whatever tools necessary to enable the business,” he explains. “This challenged CISOs as well as IT and security teams given the surge in SaaS adoption and utilization.” This created security gaps that needed to be addressed as organizations began to navigate the “new normal” for working environments. “With the workforce now more in a decentralized nature, there's a critical need to centralize security throughout all the disparate SaaS applications meant to drive business enablement,” O'Connor says. Ofek agrees, noting as more organizations adopt hybrid work models, security and IT teams will need to devise new processes, policies, and controls around SaaS applications to allow for secure but easy access--and it starts with visibility.



Quote for the day:

"Don't necessarily avoid sharp edges. Occasionally they are necessary to leadership." -- Donald Rumsfeld

Daily Tech Digest - November 02, 2022

This would be a good time to test your cloud ROI

Hansson argues that the cloud at one point made sense for his business, but no longer does. “Yet by continuing to operate in the cloud, we're paying an at times almost absurd premium for the possibility that it could (be needed). It's like paying a quarter of your house's value for earthquake insurance when you don't live anywhere near a fault line,” Hansson wrote. “We're paying over half a million dollars per year for database (RDS) and search (ES) services from Amazon. Yes, when you're processing email for many tens of thousands of customers, there's a lot of data to analyze and store, but this still strikes me as rather absurd. Do you know how many insanely beefy servers you could purchase on a budget of half a million dollars per year?” He then addressed the “but you need to pay people to manage those servers” issue. “Anyone who thinks running a major service like HEY or Basecamp in the cloud is simple has clearly never tried," he said. "Some things are simpler, others more complex, but on the whole, I've yet to hear of organizations at our scale being able to materially shrink their operations team just because they moved to the cloud.”


Top 5 Security Actions Every CEO Should Take

Adopt and Demonstrate a Proactive Mindset - At first, this may seem like an obvious reiteration of an accepted business practice. However, organizations take this lightly far too often. A CEO’s direct involvement with cybersecurity practices must herald noticeable changes. This should be most evident in an organization's mindset towards implementing any proposed transformations. All policies enacted should reflect an active privacy and security governance model that adopts a proactive approach to resolving and mitigating all security challenges rather than relying on a reactive response. ... Conduct Rigorous Assessments - A critical practice that most organizations often shy away from is implementing a consistent assessment regime that thoroughly evaluates systems and mechanisms to ensure cybersecurity standards are up to par. Yes, it’s a monotonous job, which may be why most organizations often overlook the simple fact that it is not enough just to have sufficient measures and mechanisms in place. It is equally important to ensure that these measures are cross-checked and regularly run through assessments validating their effectiveness.


Some rain in the clouds

In the bad old days of on-premises data centers, if you bought a server, you owned it. No matter how generous the discount you negotiated with your hardware vendor, once they sold it to you, it really didn’t matter how little you made the CPU spin—they weren’t going to give you any money back. Fast forward to the days of cloud computing, by contrast, and it’s a fundamental principle that you pay for what you use. Use less, pay less. Does this mean enterprises may elect to use fewer cloud computing resources in a downturn? Of course it does. Is that a good thing? Absolutely. Why? Because it’s a customer-centric view rather than a vendor-centric view. Each of the cloud providers understands this, which is why their executives were united in praising, not lamenting, the ability of customers to spend less when times are hard. Alphabet/Google CEO Sundar Pichai introduced this theme, arguing that “the long-term trends that are driving cloud adoption continue to play an even stronger role during uncertain macroeconomic times.” Namely, cloud yields flexibility for enterprises to scale up or down based on their needs.


Cyberattacks Are Bypassing Multi-Factor Authentication

In addition to compromising MFA platforms and tricking employees into approving illegitimate access requests, attackers are also using adversary-in-the-middle attacks to bypass MFA authentication, according to a report released by Microsoft’s Threat Intelligence Center this summer. More than 10,000 organizations have been targeted by these attacks over the past year, which work by waiting for a user to successfully log into a system, then hijacking the ongoing session. “The most successful MFA cyber-attacks are based in social engineering, with all types of phishing being the most commonly used,” said Walt Greene, founder and CEO at consulting firm QDEx Labs. “These attacks, when carried out properly, have a fairly high probability of success to the unsuspecting user.” It’s clear that MFA alone is no longer enough and data center cybersecurity managers need to start planning ahead for a post-password security paradigm. Until then, additional security measures should be put in place to strengthen access controls and limit lateral motion through data center environments.


How asset-oriented platforms will change the trajectory of Web3

Web3 is all about leveraging assets — tokens or NFTs — to create systems of incentives to deliver products and services in ways that are more automated, trusted, and permission-minimized. You can’t have DeFi, identity solutions, or Decentralized Autonomous Organizations (DAOs) without assets that grant some form of rights or responsibilities when participating in a network. But building an asset in today’s Web3 is the same as setting up your own web infrastructure in the early 2000s; everyone is doing everything themselves. To catalyze Web3 adoption, developers must be able to leverage (and improve upon) the work others have done so far. But because developers can’t easily reuse others’ code on-ledger, they have to copy-paste it. The result is redundant code clogging networks, leading to increased transaction costs and billions of dollars of security breaches. Then comes the aspect of composability, the feature that allows for interconnected decentralized applications and protocols.


6 Habits to Include in Your Daily Routine for a Long, Happy Career as a Data Scientist

Microlearning is a proven way to pick up new data science skills in less than 10 minutes per day. Developing this habit is a great way to keep you interested in advancing your skills as a data scientist by picking up new technologies or ways of doing things. Medium, Reddit, Substack, and various podcasts (see below) are great sources of information about new advances in data science that may inspire you to try learning something new. The key for adult learners is to keep the learning short and pointed toward a specific, tangible goal. This means keeping the learning to short 10-minute blocks with objectives that are easily achievable within that time. Not only does the short time each session takes keep you motivated to keep moving forward in your studies, but it also ensures that you’re advancing your skills after every study session. Furthermore, it doesn’t seem like a hardship to complete a habit that takes less time than a coffee break. In my experience, taking 10 minutes a day to work on a skill doesn’t provide huge gains immediately, but compounds slowly over time to produce something you can be proud of at the end of a year.
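The arithmetic behind that slow compounding is easy to check: ten minutes a day, kept up daily, adds up to a substantial block of study time over a year.

```python
minutes_per_day = 10
hours_per_year = minutes_per_day * 365 / 60  # daily microlearning, accumulated over a year
print(round(hours_per_year, 1))  # roughly 60 hours of focused study
```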


How to improve security awareness and training for your employees

You need to measure the true outcomes of your security training and not just look at employee participation as a statistic. Consider the employee behaviors you’d like to see change as a result of the training, and then determine if they actually do change. Such behaviors include correctly classifying sensitive emails to be encrypted, following security warnings, not falling for phishing emails and avoiding general human errors. These can all be measured to determine if your training is truly having a positive effect. Rather than offer the same generic training to all employees, tailor your training to individuals based on history, needs, job role and other factors. You might start out by using security questionnaires to gauge the level of risk among different employees. Then, consider an employee’s job role and level of seniority to determine how likely they are to be targeted by cyberattacks. Next, assess the risk of an employee accidentally or intentionally causing a security incident over privileged data or sensitive systems. 
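The risk-tailoring steps above can be sketched as a simple scoring function. The field names, weights, and categories here are illustrative assumptions, not from any vendor's product; a real program would feed in questionnaire results and incident history, but the shape of the logic is the same: score each employee's exposure, then prioritize the highest-risk people for targeted training first.

```python
def training_risk_score(profile: dict) -> int:
    """Toy risk score: higher values suggest more frequent, more tailored training.
    Weights and field names are illustrative only."""
    score = 0
    if profile.get("handles_privileged_data"):
        score += 3  # access to sensitive systems raises the stakes of any mistake
    if profile.get("seniority") in {"executive", "finance"}:
        score += 2  # common targets of spear phishing and invoice fraud
    score += profile.get("past_phishing_clicks", 0)  # behavioral history from simulations
    return score

employees = [
    {"name": "A", "handles_privileged_data": True, "seniority": "executive", "past_phishing_clicks": 1},
    {"name": "B", "handles_privileged_data": False, "seniority": "engineer", "past_phishing_clicks": 0},
]

# Prioritize the highest-risk employees for tailored training first
ranked = sorted(employees, key=training_risk_score, reverse=True)
print([e["name"] for e in ranked])  # ['A', 'B']
```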


The data flywheel: A better way to think about your data strategy

Once you’ve identified a problem worthy of solving, the next step is to capture the data you need to solve it. If you’ve defined your problem well, you’ll know what that data is, which is key. Just as defining your problem narrows the variety of data you might capture, figuring out what data you need, where to get it, and how to manage it will narrow the vast catalog of people, processes, and technologies that could compose your data environment. Consider how this played out for Alina and ChampionX. Once the team knew the problem—site visits were costly—they quickly identified the logical solution: Reduce the number of required site visits. Most visits were routine, rather than in response to an active problem, so if ChampionX could glean what was happening at the site remotely, they could save considerable time, fuel, and money. That insight told them what data they would need, which in turn allowed ChampionX’s IT and Commercial Digital teams to discern who and what they needed to capture it. They needed IoT sensors, for example, to extract relevant data from the sites. 


9 dark secrets of the federated web

Much of the speed on the internet relies on smart caching policies. There's a drawback for federated architectures, though, which can run into legal and practical hassles with caching. A friend spent months redoing the checkout system for an online store where he worked. Credit card processors had rules against caching, which caused some of his biggest performance problems. Federated sites may be willing to share information one time, but they may also have strict rules about how much data you can retain from the interaction. Perhaps they’re worried about security, or they could be worried you’ll cache enough data that you won’t need them anymore. In any case, caching is often a hassle with federated sites. ... One way that sites try to simplify federated relationships is to store authorizations and keep them working for months or years. On one hand, users like saving the time it takes to reauthorize. On the other hand, they often forget that they’ve authorized some distant server, which can become a security hole. 
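One practical countermeasure to the forgotten-authorization problem is simply to age grants out. The sketch below is a minimal illustration, assuming a hypothetical 90-day re-review policy (no such standard exists); the idea is that long-lived federated grants get flagged for re-authorization rather than trusted indefinitely.

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

MAX_AUTH_AGE = timedelta(days=90)  # illustrative policy, not a standard

def is_authorization_stale(granted_at: datetime, now: Optional[datetime] = None) -> bool:
    """Flag long-lived federated grants for re-review instead of trusting them forever."""
    now = now or datetime.now(timezone.utc)
    return now - granted_at > MAX_AUTH_AGE

granted = datetime(2022, 1, 1, tzinfo=timezone.utc)
print(is_authorization_stale(granted, now=datetime(2022, 11, 1, tzinfo=timezone.utc)))  # True
print(is_authorization_stale(granted, now=datetime(2022, 2, 1, tzinfo=timezone.utc)))   # False
```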


CIOs and Cutting IT Bloat: Forming a Plan

“When we talk about IT bloat, we’re talking about IT service management spend on software or tools that you’re not getting the full value of,” explains Jenna Cline, head of IT strategy and planning at Atlassian. “We’re not talking about people but rather tools that were purchased that your team isn’t using to its full potential and doesn’t need.” She says the second important thing to consider is that bloat is relative. “We’re not seeing IT spend decreasing -- but in comparison to the year-over-year increasing budgets that we’ve seen for the past decade, stagnating IT budgets may feel like a decrease,” she says. To measure whether a tool is providing maximum value to your team, Cline says she likes to look at four key categories: usage, time to value, total cost of ownership, and growth. For usage, it's important to consider whether the applications, licenses, and services that the organization has invested in are being used, and if they are a “right-sized fit” for the firm. “This means you must know if you have a plan to use all these features or access at the level of investment we’ve engaged at,” Cline says.
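The usage category lends itself to a simple first-pass metric. The sketch below (tool names, seat counts, and the 50% threshold are all made up for illustration) computes the fraction of purchased licenses actually in use and flags under-utilized tools as rightsizing candidates, a rough proxy for the bloat Cline describes:

```python
def license_utilization(seats_purchased: int, seats_active: int) -> float:
    """Fraction of purchased licenses actually in use; a rough bloat signal."""
    return seats_active / seats_purchased

# Hypothetical tool inventory: (seats purchased, seats active in the last quarter)
tools = {"chat": (500, 480), "whiteboard": (500, 60)}

for name, (purchased, active) in tools.items():
    util = license_utilization(purchased, active)
    if util < 0.5:  # illustrative threshold for flagging under-used tools
        print(f"{name}: {util:.0%} used -- candidate for rightsizing")
```

A real assessment would weigh this against Cline's other three categories (time to value, total cost of ownership, growth) rather than cutting on usage alone.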



Quote for the day:

"Leadership is, among other things, the ability to inflict pain and get away with it - short-term pain for long-term gain." -- George Will

Daily Tech Digest - November 01, 2022

Zscaler's Cloud-Based Cybersecurity Outages Showcase Redundancy Problem

Businesses also should ensure that they realize that cloud security and reliability follow a shared responsibility model. Cloud providers — including cybersecurity services based in the cloud — are responsible for their infrastructure, but companies should architect their cloud or hybrid infrastructure to handle outages. Companies that know their cloud vendor's architecture will be better prepared for outages, says CSA's Reavis. "It is important to understand how the provider achieves redundancy in its architecture, operating procedures including software updates, and its footprint of global data centers," he says. "The customer should then understand how that redundancy satisfies their own risk requirements and if their own architecture takes full advantage of the redundancy capabilities." Business customers should also regularly re-evaluate their technology landscape and risk profile. While network and power failures — and natural disasters — used to dominate resilience discussions, malicious threats such as ransomware and denial-of-service attacks are often more likely to dominate discussions today, says Forrester's Maxim.


How Intuit’s Platform Engineering Team Chose an App Definition

An application definition is an operational runbook that describes in code everything an application needs to be built, run and managed. ... “Our main requirements for the app spec needed to be application-centric; there shouldn’t be any leakage of cloud or Kubernetes resources into the application specification. And it had to meet the deployment as well as the operational needs of the application,” she said. “The two choices we had were the Open Application Model, which suited our needs pretty well. Or we could go with a templating style model where you had to provide a bunch of input parameters. But there was also a lot of abstraction leaked into the application spec. So it was easy for us to go with an OAM-style application specification.” At a high level, the developer should be able to describe his intent: “This is the image that I want; here are my sizing needs, both horizontal and vertical. And I had a way to override these traits, depending on my environment, and be able to generate the Kubernetes resources.”
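The intent described in that quote (a base image and sizing, with per-environment trait overrides) can be sketched as plain data. The field names below are illustrative, not the actual Open Application Model schema, and the merge logic is a deliberately simplified stand-in for what a platform would do when generating Kubernetes resources:

```python
# Hedged sketch of an OAM-style application spec; field names are illustrative.
app_spec = {
    "name": "payments",
    "image": "registry.example.com/payments:1.4.2",
    "sizing": {"replicas": 3, "cpu": "500m", "memory": "512Mi"},
    "traits": {
        # per-environment overrides of the base sizing
        "prod": {"replicas": 6},
        "dev": {"replicas": 1, "cpu": "250m"},
    },
}

def resolve(spec: dict, env: str) -> dict:
    """Merge base sizing with any environment-specific trait overrides."""
    sizing = dict(spec["sizing"])
    sizing.update(spec["traits"].get(env, {}))
    return sizing

print(resolve(app_spec, "prod")["replicas"])  # 6: prod trait overrides the base
print(resolve(app_spec, "dev")["cpu"])        # 250m: dev runs smaller
```

The appeal of this style over pure templating is visible even in the sketch: the developer states intent in application terms, and nothing about Kubernetes leaks into the spec itself.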


Into the Metaverse: Making the Case for a Virtual Workspace

“The ability to market and educate through the metaverse and VR is extremely important because businesses are able to represent their products in a different way and touch their consumers on a new level,” he says. “Additionally, consumers will have social experiences that are frictionless, particularly as technology improves.” As the technology becomes more intuitive to use, it’ll be easier for consumers to get involved and discover things for the first time through the metaverse. Edelson says from a marketing perspective, the metaverse allows brands to implement creative marketing strategies that they could not necessarily do if they had limited real estate or shelf space in a store. “From a sales standpoint, the metaverse provides additional digital touchpoints and KPIs,” he adds. “Businesses have the ability to judge and understand consumer sentiment around how they are interacting with a virtual product or good, providing key and valuable data.” He adds that TradeZing is a content-based business, so the company examines and seeks content through a different medium such as VR or the metaverse.


Geo-Distributed Microservices and Their Database: Fighting the High Latency

The microservice instances of the application layer are scattered across the globe in cloud regions of choice. The API layer, powered by Kong Gateway, lets the microservices communicate with each other via simple REST endpoints. The global load balancer intercepts user requests at the nearest PoP (point-of-presence) and forwards the requests to microservice instances that are geographically closest to the user. Once the microservice instance receives a request from the load balancer, it’s very likely that the service will need to read data from the database or write changes to it. And this step can become a painful bottleneck—an issue that you will need to solve if the data (database) is located far away from the microservice instance and the user. In this article, I’ll select a few multi-region database deployment options and demonstrate how to keep the read and write latency for database queries low regardless of the user’s location. So, if you’re still with me on this journey, then, as the pirates used to say, “Weigh anchor and hoist the mizzen!” which means, “Pull up the anchor and get this ship sailing!”
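The routing idea above (send each request to the geographically closest instance or replica) reduces to a minimum-latency lookup. The region names and latency numbers below are made up for illustration; real load balancers use anycast and health checks rather than a static table, but the selection logic is essentially this:

```python
# Toy latency table (ms) from user regions to replica regions; numbers are illustrative.
LATENCY_MS = {
    ("eu-west", "eu-west"): 5,
    ("eu-west", "us-east"): 80,
    ("us-east", "eu-west"): 80,
    ("us-east", "us-east"): 5,
}

def nearest_replica(user_region: str, replicas: list) -> str:
    """Route a read to the replica with the lowest latency for this user."""
    return min(replicas, key=lambda r: LATENCY_MS[(user_region, r)])

replicas = ["eu-west", "us-east"]
print(nearest_replica("eu-west", replicas))  # eu-west
print(nearest_replica("us-east", replicas))  # us-east
```

The harder part, which the article goes on to address, is keeping writes fast too, since a single-region primary leaves distant users paying the cross-ocean round trip on every update.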


How to ensure 5G wireless network security

The RAN (Radio Access Network) is the actual antenna carrying the radio waves at the 5G spectrum. As Scott Goodwin from DigitalXRAID says: “They are the very edge of the whole ecosystem, on cell towers placed as close to end users as possible and act as a conduit from the radio world to packet-switched or IP-based digital network.” The RAN presents unique challenges, not least the risk of physical damage to antennas — not necessarily deliberate, but potentially so. Consider, for example, how during the height of the Covid epidemic, some individuals blamed 5G for exacerbating Covid and tried to damage the masts. For organisations trying to ensure 5G wireless network security, there isn’t much they can do to protect cell towers — these are outside their control. But they will need contingency plans; they will need to plan to ensure business continuity. In short, they will need to understand the risks and plan accordingly. “I think that a lot of organisations and enterprises have got to start to think in new ways about business continuity,” says Belcher, “especially in light of what’s happened over the last couple of years.”


Kent Beck: Software Design is an Exercise in Human Relationships

The first relationship is between the idea being explored and the behaviour that will bring that idea into being. The relationship is bidirectional and evolving - the idea defines the behaviour and the existence of the behaviour can generate more ideas (now that I see this, what about that…). Underneath the idea and behaviour is the structure of the system. This is the architecture, and it has a profound effect on the visible behaviour and on the ideas that get generated, because the behaviour can change. Structure influences behaviour, which results in new ideas requesting changes to behaviour, which can have an impact on structure, in a continuous loop. The Tidy First? workflow causes us to pause and ask whether the structure needs to be changed or we should just implement the behaviour change. If the change needs to impact the structure, then Tidy the structure First: refactor the underlying architecture as needed. Do NOT try to make structure and behaviour changes at the same time, as that is where systems devolve into technical debt.


When Is It Time to Stop Using Scrum?

Some indicators help Scrum teams understand whether their progress as a team, as well as the product’s increasing maturity, justify inspecting their original decision to use Scrum. For example:
- Increasing product maturity leads to progress becoming more steady and less volatile; there are fewer disruptions in delivering Increments. There are fewer match points to score, and progress becomes more incremental—focusing on minor improvements—as the “big features” are already available.
- The Scrum team experiences growing difficulties in creating team-unifying Sprint Goals; there is a growing number of Product Backlog items that cannot be clustered under one topic.
- Reduced volatility results in an expanding planning horizon. At least, there is a temptation to plan further ahead, which also suits the team’s stakeholder relationship management.
- Stakeholders gain more trust in the team’s capabilities, resulting, for example, in less engagement in Sprint Reviews.
Metaphorically speaking, the Scrum team moves from leaping forward to walking steadily.


Edge computing: 5 must-have career skills

When the networking is good enough, then it’s good enough – remote compute and storage isn’t as pressing of a need because the data is going back to a centralized environment. But when low latency is a critical requirement – and that’s one of the essential purposes for edge computing – organizations will need to be able to deliver the necessary infrastructure resources on site. AI workloads with large datasets, for example, or applications that require near-real-time feedback loops may be better served on-site – which means they need compute, storage, and other resources to run properly. Nelson notes that edge infrastructure can require capabilities that fall outside of the typical data center or cloud engineer’s experience. “Managing edge compute at scale can be very different than traditional data center management,” Nelson says. “Thousands of devices across hundreds of sites with little to no onsite staff can be daunting.” As Haff from Red Hat notes, you can’t usually send a help desk pro out to the edge every time one of those devices needs maintenance.


Younger workers want training, flexibility, and transparency

Regardless of the root causes, younger workers are sending a clear signal to leaders that they want more training and development, particularly in newer technical and digital skills. And given the (likely permanent) shift to hybrid work, they will need more career guidance, coaching, and mentoring—all of which will require companies to rethink how they can best support remote and hybrid workers and foster a strong learning culture through both in-person and virtual channels. Notably, technical skills are teachable. But softer skills like collaboration, communication, and conflict resolution often are not—and the younger workforce will need to learn these firsthand, through interactions with colleagues. The flow of skills can also move in the other direction, through reverse-mentoring initiatives that empower younger workers to partner with executives and guide them in areas such as technology or social issues. Reverse-mentoring gives younger workers a voice, builds relationships across generations, and sends a clear signal that younger workers have valuable contributions to offer.


The Next Evolution of Virtualization Infrastructure

For applications that were still tied to the data center, alternatives like OpenStack emerged to enable scale-out virtualization infrastructure for both private cloud and many public cloud Infrastructure as a Service deployments. Red Hat OpenStack continues to provide a leading distribution in this space, powering enterprise private cloud and telco 4G network function virtualization environments. The third age of virtualization was actually a move away from the hypervisor and traditional VMs: the age of containers. Just as virtualization, by leveraging the power of the hypervisor, had broken physical servers into many individual virtual servers, each running its own OS, so too did containerization divide a single Linux OS (running on those virtual machines or directly on bare-metal servers) into even smaller application sandboxes, using namespaces, cgroups, and the Docker packaging format, now standardized through the Open Container Initiative. This enabled pioneering developers to build and provision containerized microservices on their local machines and promote them to test, stage, and production environments, consistently and on demand.
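The packaging described above can be sketched as a minimal OCI-format image definition. This is an illustrative sketch only; the base image tag and the application file name are hypothetical:

```dockerfile
# Hypothetical microservice image: the Docker/OCI packaging format bundles
# the application together with its runtime, so the same image can be
# promoted unchanged from a developer laptop to test, stage, and production.
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

Built once (e.g., `docker build -t myservice .`), the resulting image runs identically on any host with an OCI-compatible runtime, which is what makes the consistent, on-demand promotion across environments possible.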



Quote for the day:

"Nobody in your organization will be able to sustain a level of motivation higher than you have as their leader." -- Danny Cox

Daily Tech Digest - October 31, 2022

Vaadin CEO: Developers are the architects of the future

Developer passion for a project is the best barometer of its utility and potential. Developers love that Vaadin provides components and tools that make it faster and easier to build modern web apps with a great UX. End users in the enterprise now expect an intuitive and pleasing user experience, just like they are used to in their personal lives as consumers. I’m excited that we are making it easy for developers to deliver a compelling UX for the Java-based applications powering the enterprise. ... Last year, the competition for tech talent was fierce, and many companies learned the hard way that if you’re not intentional about your culture, your talent can quickly find a new workplace that provides something that better meets their culture needs. Part of that is investing in understanding what developers do, understanding the technologies they use, listening to pain points, and smoothing the day-to-day path so they can do what they do best. You need to offer flexibility—not just in terms of work/life balance and autonomy, but also maintaining an agile enough environment to account for new tool preferences and process efficiencies as determined by the developers.


Raising the resilience of your organization

Rather than continually tell teams what to do, leaders in resilient organizations minimize bureaucracies and foster entrepreneurship among and within teams. They nearly always put decision making in the hands of small cross-functional teams, as far from the center and as close to the customer as possible. They clarify the team’s and the organization’s purpose, provide some guardrails, and ensure accountability and alignment—but then they step back and let employees take the lead. The Disney theme parks provide a good example: every employee is dubbed a cast member, and their clear objective is to create “amazing guest experiences” within a set of guardrails that includes, among other responsibilities, ensuring visitor safety and fostering family-friendly behaviors. ... Another characteristic of resilient organizations is their ability to break down silos and use “tiger teams” to tackle big business problems. These are groups of experts from various parts of an organization who come together temporarily to focus on a specific issue and then, once the issue is addressed, go back to their respective domains.


Government ups cyber support for elderly, vulnerable web users

The Department for Digital, Culture, Media and Sport (DCMS) said research had shown that many people struggle to engage with and benefit from the range of digital media literacy education available, for reasons such as limited experience of or lack of confidence in going online, lack of awareness of how to access such education, and limited availability of that education. It created the Media Literacy Taskforce Fund earlier this year as one of two funding schemes aimed at reaching hard-to-reach or vulnerable groups through community-led projects. The other scheme, the Media Literacy Programme Fund, is set to deliver training courses, online learning, tech solutions and mentoring schemes to vulnerable web users. Grant recipients from the first fund include: Fresherb, a social enterprise working with young people to develop podcasts – aired on local radio stations – that explore issues around online dis- and misinformation; and Internet Matters, a Manchester-based charity providing media literacy training for care workers and school leavers.


Dell, AMD, IBM, and Strangeworks Dig into Quantum’s Future

Besides simply getting quantum computers to work, there are many questions about what financial applications will actually be able to do better on quantum systems than on classical resources. Optimization is often touted, but perhaps wrongly so, said IBM’s Prabhakar. “There are basically three areas which are interesting: there’s simulating nature, there is optimization, and then there is machine learning,” said Prabhakar. “Initially, when we started working with banks, right – JPMC is a client of ours, as is Goldman Sachs and others – they started looking at optimization. But it was very clear that there are classical methods which are actually advancing as quickly in optimization. So, if you’d asked me three years ago, my answer would have been optimization. Now we are realizing it’s not optimization, it’s probably machine learning-based analysis which is going to have the first early use cases. “If you look at the work our clients are doing, you will see that a lot of them are actually not making the decision right now on which of these three areas they want to work on.”


Tech in turmoil: Talent disruption in India’s IT sector and the ‘M’ word

The issue of ‘moonlighting’ has been doing the rounds of late. It got wider attention after Wipro chief Rishad Premji equated it to cheating. Several other companies have also raised concerns, with a few of them even sacking employees for moonlighting. Wipro recently fired 300 employees for moonlighting. Additionally, it has announced that its offices will open four days a week, with employees required to attend in person at least three days a week; the move is intended to combine a flexible approach with helping teams experience and build meaningful relationships at work. CN Ashwath Narayan, the IT minister of Karnataka, asked those who moonlight to leave the state, saying freelancing beyond office hours is “literally cheating”. IT and tech giant IBM, too, sent a strong note to its employees over moonlighting. In it, Sandip Patel, India and South Asia head of IBM, wrote: “A second job could be full time, part time or contractual in nature but at its core is a failure to comply with employment obligations and a potential conflict of interest with IBM’s interest.”


Cyber Ratings as Measures of Digital Trust

Companies have added cyber reputation management practices to their cybersecurity organizations to manage these public cyber ratings. Several firms provide services in the security rating category, each with its own models and algorithms. Cybersecurity teams subscribe to one or more of these services to manage the data used in such ratings. They also subscribe to monitor their ever-growing list of third parties and look for weaknesses that might bring about cyber incidents. Proxy advisors use these same rating services to supplement financial data in annual proxy statements. Other use cases include cyber insurance underwriters looking for evidence-based reasons to reject applications and assist clients in improving their control environments. At the core, such practices increase trust in the digital economy. Credit ratings are an apt model for implementing trust models. Like corporate credit ratings, they can be done with or without involvement from the rated entity. They can also be made available to the public (like a security rating). In-depth analyses can also be done and shared privately (think of this as a pen test or security assessment).


Introduction to Cloud Custodian

By automating a lot of the tedious policy management away, Cloud Custodian could reduce risk and accidents through more streamlined cloud governance. “It solves the natural problems when infrastructure is in everyone’s head,” says Thangavelu. By aggregating ad-hoc scripts and unifying policies across an organization, you could immediately instigate new rules without manually reminding all members of an organization, which could take years. For those familiar with Open Policy Agent (OPA), you may notice some overlap in the objectives, as both are engines for enacting cloud-native policies. Compared to OPA, Cloud Custodian has some developer experience perks. For one, you don’t have to use Rego, as the policies are written in YAML, which is a familiar configuration language for DevOps engineers. Cloud Custodian uses abstractions on event runtimes for each cloud provider. Furthermore, compared to OPA, you don’t need to bind the engine for a particular problem domain, says Thangavelu, as Cloud Custodian is specifically bounded to cloud governance and management.
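As a hedged illustration of that YAML policy style (the policy name, tag, and remediation below are hypothetical, not taken from the article), a Cloud Custodian policy that stops untagged EC2 instances might look like:

```yaml
policies:
  - name: stop-unowned-instances     # hypothetical policy name
    resource: aws.ec2                # provider-specific resource type
    filters:
      - "tag:Owner": absent          # match instances missing an Owner tag
    actions:
      - stop                         # remediation applied to every match
```

Because the policy is declarative YAML rather than ad-hoc scripts, it can be version-controlled and applied uniformly across an organization, which is exactly the unification Thangavelu describes.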


Data Security Takes Front Seat In Industrial IoT Design

Even apparently mundane applications, such as smart home utility meters, are targets of opportunity for thieves looking to steal power from the grid. Thankfully, data breaches have appeared in the headlines often enough that customers are awakening to the need for security as a core component of their technology solutions. Increasingly, my group is engaging earlier in the design process to help customers better understand how to adequately provision and scale their devices with a combination of hardware-accelerated cryptography, secure key storage, and some form of physical protection. ... It is important to understand which tools to use and when, starting with the basic building blocks and moving on to complete solutions. It’s the equivalent of buying a four-digit combination lock from the hardware store. They all come set to “zero,” and we help customers find the best way to program the lock. As the world’s devices, systems, appliances and IIoT networks adopt more technology layers, it’s crucial to ensure that, as these building blocks are assembled, their attack surface is as small as possible.


The Challenges of Ethical Data Stewardship

A real challenge for businesses is how to manage third-party data: a company often has no control over how that data was gathered or how its partners use the data it provides. Another challenge that is often overlooked is data that comes as part of an acquisition of a company. How do you extend ethical controls to that? Langhorne believes that Precisely is aligned in thinking about these different aspects, needs and requirements. Although she is new to the company, she sees Precisely’s privacy journey continuing to mature. In the short term, she says, “we have principles that we follow and compliance expectations. In considering the suppliers of our data, we do due diligence. We also have contracts with obligations, and we are very concerned about the quality of our data because that is part of our brand promise. It is about data integrity, and that means having data you can trust. These same principles extend to our M&A activity, an area we have a lot of experience in having acquired seven businesses in the past two years!”


Technology Feeds Sustainable Agriculture

For now, Ganesan says, the biggest problem is the lack of a holistic loop to address sustainability. While farmers are open to innovation, particularly through technology, they often have difficulty embracing sustainability because their suppliers -- which sell seeds, chemicals, and other products -- are slow to adopt lower carbon practices. “We’re seeing progress but it’s still a bit of the wild, wild west,” he says. “In many cases, there’s a lack of coordination within industries and by governments.” There are some encouraging signs, however. For instance, in February 2022, the United States Department of Agriculture announced that it is investing US $1 billion in companies to reduce greenhouse gas emissions and fuel innovation related to climate-based technologies. Ganesan believes that the agricultural industry has only scratched the surface of what’s possible with precision technology and analytics. “A key is to develop technologies that not only drive improvements but also create value for farmers. This will accelerate sustainability,” he notes.



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell

Daily Tech Digest - October 30, 2022

FCA examining big tech disruption of financial services

The UK regulator has warned that competition in the financial services sector could be harmed as Amazon, Apple, Google and Meta look to continue innovating in the industry, the Financial Times reports. The FCA will ask the corporations — which all hold FCA permits for payment processing in the UK — for their perspectives on how Silicon Valley could expand into payments, deposits, credit and insurance. All four companies hold payment permits, with Amazon and Apple also having some permissions regarding consumer credit and insurance. While the watchdog acknowledges that big tech involvement in financial services would bring “increased efficiency” and “healthy competition” in the short term, it states that this could lead to longer-term exploitation of ecosystems and data stores to “lock consumers in”. The body has also suggested that tech companies generally should share customer data with traditional financial services institutions.


The ins and outs of migrating SQL Server to the cloud

A homogeneous migration between an on-prem version of SQL Server and the RDS equivalent can actually be relatively easy, says Ayodele. The only change necessary is an alteration of the system schema. RDS has built-in stored procedures for management purposes that are not in the on-prem SQL Server engine. So, customers can simply migrate the database itself to avoid corrupting the RDS system schema. The next step will be to use native tools or AWS Database Migration Service (AWS DMS) to port the data across from the source to the destination. With AWS DMS, the source database remains operational during this process to minimize downtime. DMS can use change data capture (CDC) technology to keep track of ongoing changes in the source database during migration. Once the migration is done, the final steps will be to run the RDS version as a replica and then switch over to the RDS primary database instance when ready. There are some best practices that customers should follow when migrating, Ayodele says. 
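The DMS step above can be sketched in code. The ARNs and identifiers below are placeholders, not real resources, and the parameter dict is assembled separately so the shape of a full-load-plus-CDC task is visible:

```python
import json

# Placeholder ARNs -- in practice these come from source/target endpoints
# and a replication instance created beforehand in AWS DMS.
task_params = {
    "ReplicationTaskIdentifier": "sqlserver-to-rds",
    "SourceEndpointArn": "arn:aws:dms:region:acct:endpoint:source",
    "TargetEndpointArn": "arn:aws:dms:region:acct:endpoint:target",
    "ReplicationInstanceArn": "arn:aws:dms:region:acct:rep:instance",
    # full-load-and-cdc: copy the existing data, then use change data
    # capture so the source stays operational during the migration.
    "MigrationType": "full-load-and-cdc",
    "TableMappings": json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-dbo",
            "object-locator": {"schema-name": "dbo", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
}
print(task_params["MigrationType"])
```

The dict would then be passed to `boto3.client("dms").create_replication_task(**task_params)`; once the full load completes, DMS keeps applying CDC changes until the cutover to the RDS primary.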


Big 3 Public Cloud Providers Highlight Cost Control Capabilities

"The long-term trends that are driving cloud adoption continue to play an even stronger role during uncertain macroeconomic times," Pichai said. "As companies globally are looking to drive efficiencies, Google Cloud's open infrastructure creates a valuable pathway to reduce IT costs and modernize." Using the cloud to do more with less was also a theme echoed by Microsoft CEO Satya Nadella during his company's earnings call. Moving to the cloud helps organizations align their spend with demand and mitigate risk around increasing energy costs and supply chain constraints, Nadella said. Microsoft is also very optimistic about the growth of hybrid cloud in addition to public cloud services. Nadella said that Microsoft now has more than 8,500 customers for its Azure Arc technology, which is more than double the number a year ago. "We're also seeing more customers turn to us to build and innovate with infrastructure they already have," he said. "With Azure Arc, organizations like Wells Fargo can run Azure services, including containerized applications across on-premises, edge, and multicloud environments."


Delivering visibility requires a new approach for SecOps

The biggest challenge most organizations face when operationalizing these frameworks is the fact that the data/information they need is siloed across multiple data systems and cybersecurity tools. Security data lives in multiple places, with organizations using a variety of data logging systems like Splunk, Snowflake, or other data lakes as the foundation for threat hunting and research. The security operations center (SOC) will layer platforms for Security Information and Event Management (SIEM), Extended Detection and Response (XDR), and other tools on top of these data lakes to help analyze data and correlate events (e.g., Crowdstrike Falcon Data Replicator or email security systems such as Proofpoint or Tessian). Security analysts can spend hours exporting data from these systems, tagging and normalizing the information, and ingesting that data into their SIEMs and SOARs before they can begin to detect, hunt, triage, and respond to threats. But next-gen tools are being developed that address this exact issue with automation and machine learning.
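The tagging-and-normalization step can be sketched as mapping each source's fields onto one common schema. The field names below are illustrative only, not any vendor's actual log format:

```python
# Hypothetical sketch: normalize events from two different security tools
# into a single common schema before SIEM ingestion.
def normalize(event: dict, source: str) -> dict:
    if source == "edr":
        return {"ts": event["timestamp"], "host": event["hostname"],
                "action": event["event_type"], "source": source}
    if source == "email":
        return {"ts": event["time"], "host": event["recipient_host"],
                "action": event["verdict"], "source": source}
    raise ValueError(f"unknown source: {source}")

edr_event = {"timestamp": "2022-10-30T12:00:00Z", "hostname": "ws-01",
             "event_type": "process_start"}
mail_event = {"time": "2022-10-30T12:00:05Z", "recipient_host": "ws-01",
              "verdict": "phish_blocked"}

# Both events now share the same keys, so correlation queries can run
# across sources without per-tool parsing logic.
print(normalize(edr_event, "edr")["host"])  # ws-01
```

Doing this by hand for every tool is the hours-long toil described above; the next-gen tooling mentioned automates exactly this mapping.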


The growing role of audit in ESG information integrity and assurance

Like audits of financial statements and the internal control over financial reporting, third-party assurance enhances the reliability of ESG information and builds confidence among stakeholders. To do this, auditors conduct attestation engagements to provide assurance that ESG information is presented in accordance with certain criteria. “We help management and the board feel confident in the reported ESG information, which is important given the increased focus and attention from external stakeholders,” explains Whittaker. More specifically, ESG assurance obtained from a certified public accountant, “involves the evaluation of processes, systems, and data, as appropriate, and then assessing the findings in order to support an opinion based on an examination [reasonable assurance] or conclusion based on a review [limited assurance],” according to the Center for Audit Quality. Because companies are at different stages of their sustainability journeys, the breadth of ESG assurance engagements is vast.


If you’re going to build something from scratch, this might be as good a time as in a decade

The environment for launching a start-up was really crazy the past five years. And the truth is that if you’re going to build something from scratch, this might be as good a time as you’ve had in a decade. Real estate? You can get all the real estate you want. People used to fret about lease cost, but that’s all gone. And while people get caught up on whether the money’s cheap or not, getting rid of the distraction of all that cheap money may be a good thing. That whole mentality of, oh, your competitor raised $100 million, now you have to raise $100 million. All those things have evaporated—for the better, I’d say. A huge thing is that your access to talent is way better. It was so hard to get, but now it’s a lot cheaper than it was. There are layoffs happening. And then hybrid has opened up the people you can get. I’ve heard some pretty amazing stories. Jennifer Tejada, who runs PagerDuty, says they went into the pandemic at 85 percent Bay Area employees and came out at 25 percent.


Good Governance: 9 Principles to Set Your Organization up for Success

Good corporate governance requires that records and processes are transparent and available to shareholders and stakeholders. Financial records should not be inflated or exaggerated. Reporting should be presented to shareholders and stakeholders in ways that enable them to understand and interpret the findings. Transparency means that stakeholders should be informed of key corporate contacts and told who can answer questions and explain reports, if necessary. Corporations should provide enough information in their reports so that readers get a complete view of the issues. ... All too often, the corporate world’s focus can be taken up by sudden crises and controversies. A timely response to the unexpected is crucial, with corporations that practice good governance usually able to prioritize swift and honest communication with shareholders and stakeholders. ... Many corporations also consider the environmental impact as they perform their duties and responsibilities. 


How a Marketing Tool is Becoming the Healthcare Industry’s Security Nightmare

“Consumer activity tracking for the purpose of marketing is not a fit for the health sector,” Mike Hamilton, CISO of cybersecurity firm Critical Insight and former CISO for the city of Seattle, argues. “Because of regulatory oversight by the [US] Department of Health and Human Services, as well as the privacy statutes coming out of states, like the California Consumer Privacy Act, this is not information that is germane to the health sector mission, and its possession creates significant liability.” ... “This could have been prevented by the use of other analytic tools to understand patient usage rather than a marketing technique that is designed to gather and share so much information that is outside the scope of the intended purpose.” “At least dozens of the nation's top hospitals use tracking pixels for millions of patients. That may be changing fast due to new laws and lawsuits that will force organizations to change course drastically,” Paul Innella, CEO of TDI, a global cybersecurity company in the banking and healthcare spaces, tells InformationWeek.


Cranefly Cyberspy Group Spawns Unique IIS Technique

IIS logs record data such as webpages visited and apps used. The Cranefly attackers are sending commands to a compromised web server by disguising them as Web access requests; IIS logs them as normal traffic, but the dropper can read them as commands if they contain the strings Wrde, Exco, or Cllo, which don't normally appear in IIS log files. "These appear to be used for malicious HTTP request parsing by Geppei — the presence of these strings prompts the dropper to carry out activity on a machine," Gorman notes. "It is a very stealthy way for attackers to send these commands." The commands contain malicious encoded .ashx files; these files are saved to an arbitrary folder determined by the command parameter and run as backdoors (i.e., ReGeorg or Danfuan). Gorman explains that the technique of reading commands from IIS logs could in theory be used to deliver different types of malware if leveraged by threat actors with different goals.
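Because those three marker strings never appear in legitimate IIS logs, defenders can hunt for the technique with a simple scan. A minimal sketch (the sample log lines are fabricated examples, not real Cranefly traffic):

```python
# Marker strings used by the Geppei dropper, per the report above.
MARKERS = ("Wrde", "Exco", "Cllo")

def suspicious_lines(log_lines):
    """Return IIS log entries containing any Geppei marker string."""
    return [line for line in log_lines
            if any(marker in line for marker in MARKERS)]

sample = [
    "2022-10-28 10:01:02 GET /index.html 200",
    "2022-10-28 10:01:05 GET /default.aspx?Wrde=payload 200",
]
hits = suspicious_lines(sample)
print(hits)  # only the line containing "Wrde" is flagged
```

A production hunt would stream real IIS log files through the same check rather than an in-memory list, but the detection logic is just this substring match.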


Mitigating the risks of artificial intelligence compromise

The fundamental actions required from any security approach are to protect, detect, attest, and recover from any modifications to code, whether malicious or otherwise. The best way to fully secure an AI against compromise is to apply a “trusted computing” model that covers all four AI elements. Starting with the data set aspect of a system, a component such as a Trusted Platform Module (TPM) is able to sign and verify that any data provided to the machine has been communicated from a reliable source. A TPM can also ensure the safeguarding of any algorithms used within an AI system: it provides hardened storage for platform or software keys, and these keys can then be used to protect and attest the algorithms. Furthermore, any deviations of the model, if bad or inaccurate data is supplied, can be prevented by applying trusted principles focused on cyber resiliency, network security, sensor attestation, and identity.
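TPM programming interfaces vary by vendor, so the sign-and-verify idea is shown here only as a software analogy: an HMAC key stands in for the TPM-protected key, and the payload is a made-up sensor reading. In a real TPM the key would live in hardened storage and never leave the chip.

```python
import hashlib
import hmac

# Stand-in for a TPM-protected key (hypothetical; a real TPM never
# exposes the key material to software at all).
key = b"tpm-protected-key"

def sign(data: bytes) -> bytes:
    """Produce an authentication tag attesting the data's origin."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(data: bytes, tag: bytes) -> bool:
    """Check that the data was signed by the key holder and is unmodified."""
    return hmac.compare_digest(sign(data), tag)

payload = b"sensor reading: 42"
tag = sign(payload)
print(verify(payload, tag))      # True: data came from the trusted source
print(verify(b"tampered", tag))  # False: modification is detected
```

The same protect/attest pattern applies to model weights and algorithms: anything not carrying a valid tag from the trusted key is rejected before it reaches the AI pipeline.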



Quote for the day:

"The very essence of leadership is that you have to have vision. You can't blow an uncertain trumpet." -- Theodore Hesburgh