Daily Tech Digest - April 21, 2022

7 ways to avoid a cloud misconfiguration attack

We tend to focus a lot on avoiding misconfiguration for individual cloud resources such as object storage services (e.g., Amazon S3, Azure Blob) and virtual networks (e.g., AWS VPC, Azure VNet), and it’s absolutely critical to do so. But it’s also important to recognize that cloud security hinges on identity. In the cloud, many services connect to each other via API calls, requiring IAM services for security rather than IP-based network rules, firewalls, etc. For instance, a connection from an AWS Lambda function to an Amazon S3 bucket is accomplished using a policy attached to a role that the Lambda function takes on—its service identity. IAM and similar services are complex and feature rich, and it’s easy to be overly permissive just to get things to work, which means that overly permissive (and often dangerous) IAM configurations are the norm. Cloud IAM is the new network, but because cloud IAM services are created and managed with configuration, cloud security is still all about configuration—and avoiding misconfiguration.
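The overly permissive pattern described above is easy to spot mechanically. As a minimal sketch (the policy documents and checker below are illustrative, not tied to any real account or tool), a linter can flag wildcard actions or resources in an IAM-style policy before it is deployed:

```python
def find_wildcards(policy):
    """Return Allow statements that grant wildcard actions or resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

# Overly permissive: lets the Lambda role do anything to any S3 resource.
permissive = {"Statement": [{"Effect": "Allow", "Action": "s3:*", "Resource": "*"}]}

# Least privilege: read access to one (hypothetical) bucket only.
scoped = {"Statement": [{"Effect": "Allow",
                         "Action": ["s3:GetObject"],
                         "Resource": ["arn:aws:s3:::example-bucket/*"]}]}

print(len(find_wildcards(permissive)))  # 1 -- flagged
print(len(find_wildcards(scoped)))      # 0 -- clean
```

Real policy-as-code tools apply far richer rule sets, but the principle is the same: because cloud IAM is itself configuration, it can be linted like any other configuration.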


A.I. Is Mastering Language. Should We Trust What It Says?

So far, the experiments with large language models have been mostly that: experiments probing the model for signs of true intelligence, exploring its creative uses, exposing its biases. But the ultimate commercial potential is enormous. If the existing trajectory continues, software like GPT-3 could revolutionize how we search for information in the next few years. Today, if you have a complicated question about something — how to set up your home theater system, say, or what the options are for creating a 529 education fund for your children — you most likely type a few keywords into Google and then scan through a list of links or suggested videos on YouTube, skimming through everything to get to the exact information you seek. (Needless to say, you wouldn’t even think of asking Siri or Alexa to walk you through something this complex.) But if the GPT-3 true believers are correct, in the near future you’ll just ask an L.L.M. the question and get the answer fed back to you, cogently and accurately. Customer service could be utterly transformed: Any company with a product that currently requires a human tech-support team might be able to train an L.L.M. to replace them.


NSO Group faces court action after Pegasus spyware used against targets in UK

NSO argued in a response to the legal letters that UK courts have no jurisdiction over NSO, which is based in Israel, and that legal action is barred by “state immunity”. The company also argued that there was no proper basis for showing that NSO acted as a “data controller or a data processor” under UK data protection law. There is no basis to claim that NSO joined in a “common design” with Saudi Arabia or the UAE that would make it “jointly liable” with the two countries, it said. NSO said it provides surveillance software for the “exclusive use” of state governments and their intelligence services. It claimed to pride itself on being the only company in this field “operating under an ethical governance framework that is robust and transparent”. The company said it had policies in place to ensure its “products would not be used to violate human rights”. It claimed that the legal letters repeated “misinformation” from reports and statements by non-governmental organisations, including Citizen Lab, Amnesty International and Forbidden Stories.


IP addressing could support effective network security, but would it be worth it?

VPNs actually provide pretty good protection against outside intrusion, but they have one problem: the small sites. MPLS VPNs are expensive and not always available in remote locations. Those sites often have to use the internet, and that can mean exposing applications, which increases the risk of hacking. SD-WAN, by adding any site with internet access to the corporate VPN, reduces that risk. Or rather, it reduces that particular risk. But hacking in from the outside isn’t the only risk. These days, most security problems come from malware planted on a computer inside the company. There, from a place that’s already on whatever VPN the company might use, the malware is free to work its evil will. One thing that can help is private IP addresses. We use private IP addresses literally every moment of every day, because virtually all home networking and a lot of branch-office networking are based on them. Ranges of IPv4 and IPv6 addresses are set aside for use within private subnetworks, like your home. Within the private subnet, these addresses work like any IP address, but they can’t be routed on the internet.
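The private ranges are easy to check programmatically. A quick sketch using Python's standard library:

```python
import ipaddress

# RFC 1918 reserves 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 for
# private IPv4 use; RFC 4193 reserves fd00::/8 (unique local) for IPv6.
for addr in ["10.0.0.5", "172.16.0.1", "192.168.1.10", "fd12:3456::1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, ip.is_private)
# The first four print True; 8.8.8.8 (a public DNS server) prints False.
```

Because internet routers will not carry these addresses, a host that speaks only on a private subnet is simply unreachable from the outside, which is the property the article is leaning on.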


CIOs increasingly tap contract IT as talent gap filler

CIOs also typically find they can more easily get workers with high-demand skills on a temporary basis, Mok says. Markow agrees, adding: “You can really bring in new skill sets and capabilities more quickly in some cases.” But CIOs must be careful not to misclassify workers as contract when they should be staff, a legal distinction that could run the company afoul of labor laws, Mok says. In addition to potentially misclassifying these workers, Mok says CIOs who use contract and freelance workers too heavily or for extremely long stretches are often operating on a reactive rather than strategic basis, which can translate to missed opportunities, higher costs, and poor morale. “Using contract workers can be more cost effective when you have ad-hoc needs that need to be addressed,” Markow adds. “But that said, it can be more costly if those projects run far longer than anticipated or roll into other needs or if there are unintended requirements that come about and those workers need to stay on.”


Cybersecurity litigation risks: 4 top concerns for CISOs

The risk of litigation is not limited to corporations. CISOs themselves face being subject to legal action for breach of duty where insufficient steps were taken to prevent a breach, or the aftermath of the breach was handled badly, says Simon Fawell, partner at Signature Litigation LLP. Jinivizian agrees: “The role of the CISO has never been more critical for mid/large enterprises, and potentially more in the crosshairs and held accountable for security incidents and data breaches, as illustrated by the ongoing class action against SolarWinds’ CISO and other executives following the devastating supply chain attack in 2020,” he states. This is also evidenced by the charges against Uber’s CSO for allegedly trying to cover up a ransomware payment relating to the 2016 attack that compromised data of millions of users and drivers, Armstrong adds. If a CISO acts as a company director, then they could face shareholder actions for breach of duty following data and privacy breaches based on damage to company value, says Fawell. “Shareholder actions against directors have been on the rise in the UK and, where a data breach has led to a drop in value for shareholders, claims against directors are increasingly being considered.”


The Cybersecurity Threats Facing Smart Buildings

IoT devices are common appliances that you might even find around your home but that are connected to the internet. Examples of IoT devices include doorbell cameras, smart meters, fitness trackers, smart speakers, and connected cars. Leaving a device unprotected is like leaving the back door open or a key under the mat. There were even security doubts raised about whether Joe Biden could bring his Peloton bike into the White House when he became president. Ensuring that every single device connected in a smart building has adequate security is a must if companies want to avoid data breaches. While as much autonomy as possible is better for smart buildings, there ultimately must be some human input. Often, the people using the systems are the ones who can leave them the most vulnerable. Everyone makes mistakes, but when it comes to cybersecurity, one mistake is all it takes for a network to be breached and data to be mined. Human error in this instance might be accidentally downloading or clicking a link to malware, or using an old password and not changing it.


How IT departments enable analytics operations

Modern IT organizations need a mix of infrastructure and data skills because both enable an insight-driven enterprise. Infrastructure skills are necessary to achieve an architecture that can support data analytics requirements, including scalability, data governance and data security. Today, building, operating and innovating on the value proposition using AI is getting simpler with the aid of advanced AI cloud platforms. "This means we will need business, product and technology teams who bring skills with deep experience in leveraging data and to offer comprehensive products and capabilities that are aligned to the business context," said Rizwan Akhtar, executive vice president and CTO of business technology at real estate services company Realogy Holdings. Fundamentally, IT needs to have stronger math skills, including linear algebra, statistics, calculus and perhaps differential geometry -- skills which data scientists tend to have. However, given the mainstream adoption of AI and machine learning (ML), there are now tools that make it easier for non-data scientists to do more.


EF Core 7 Finally Divorces Old .NET Framework

In fact, it was only last summer that we reported the dev team was still playing catch-up to the old version of EF (which stopped at version 6) in the article "EF Core 6 Dev Team Plays Catch-Up with EF6." However, even as it plays catch-up, the team has also been introducing new goodies not found in the .NET Framework version. Microsoft guidance says: "EF Core has always supported many scenarios not covered by the legacy EF6 stack, as well as being generally much higher performing. However, EF6 has likewise supported scenarios not covered by EF Core. EF7 will add support for many of these scenarios, allowing more applications to port from legacy EF6 to EF7. At the same time, we are planning a comprehensive porting guide for applications moving from legacy EF6 to EF Core." And in the first preview release of EF Core 7.0 that was announced last week, an important milestone was reached. "EF7 will not run on .NET Framework," said Microsoft senior program manager Jeremy Likness in a Feb. 17 blog post.


Dynamic Value Stream Mapping to Help Increase Developer Productivity

In other industries such as manufacturing, supply chain, or distribution we can find wide adoption of process analysis and lean practices for efficiency gains. Let’s consider Value Stream Mapping, the key practice facilitating process analysis and further improvement. A Value Stream Map depicts every step in the process and categorizes each step in terms of the value it adds and what value is wasted. Although the software industry has been adopting lean principles [Lean Software Development Principles (Poppendieck and Poppendieck, 2003) and The Principles of Product Development Flow (Reinertsen, 2009)], the technique of value stream mapping has not gained much traction there. So what prevents software engineering organizations from adopting value stream mapping as a foundation for software delivery capability optimization? Even though the practice has existed for a long time and is well known among agile and lean coaches, knowledge and adoption among enterprises lag behind.



Quote for the day:

"Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has." -- Margaret Mead

Daily Tech Digest - April 20, 2022

Security-as-Code Gains More Support, but Still Nascent

"The great thing about security-as-code is that you know the configuration that you have deployed exactly corresponds to what you had specified and analyzed as meeting your security requirements," he says. "Many breaches out there are not necessarily the result of an unknown risk, but are usually the result of some control that the organization thought they had not being deployed and operating when they needed it the most." Security-as-code is an extension of the infrastructure-as-code movement that has come about as software-defined networks and systems have become more popular. DevOps teams have adopted infrastructure-as-code as the de facto standard for building and deploying software, containers, and virtual machines, but now companies are betting that the shift to cloud-native infrastructure will make security-as-code a key part of a sustainable approach to security. ... Google is joined by others blazing a trail into the security-as-code arena. The growing movement to encode security as a configuration file that can be incrementally improved led security firm Tenable Network Security to acquire Accurics, a maker of security-as-code technology.
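The core idea — a security requirement expressed as code that runs against the declared configuration before anything is deployed — can be shown with a toy check. The config format and the rule below are invented for illustration; real tools apply the same principle with far richer policy languages:

```python
# A declared firewall rule set, as it might appear in an
# infrastructure-as-code template (illustrative format, not a real schema).
security_group = {
    "name": "web-tier",
    "ingress": [
        {"port": 443, "cidr": "0.0.0.0/0"},  # HTTPS open to the world: fine
        {"port": 22,  "cidr": "0.0.0.0/0"},  # SSH open to the world: not fine
    ],
}

def violations(group):
    """Security-as-code rule: SSH must never be open to 0.0.0.0/0."""
    return [r for r in group["ingress"]
            if r["port"] == 22 and r["cidr"] == "0.0.0.0/0"]

bad = violations(security_group)
print(len(bad))  # 1 -- a CI pipeline running this check would fail the deploy
```

Because the check runs on the specification rather than the running system, what gets deployed is guaranteed to be what was analyzed — the property described in the quote above.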


The evolving role of the lawyer in cybersecurity

On the purely defensive side, advanced warning of a forthcoming attack can be the difference between a successful defensive posture or a damaging and costly incident. With such intelligence in hand, organizations can craft rules on an email gateway or firewall to effectively prevent attackers’ phishing email from reaching employee inboxes or to block the ability of an employee to navigate to a malicious link. Indeed, one of our attorney colleagues in New York used a bespoke threat intelligence system that he developed to identify and help to neutralize the domains that hosted a forthcoming cyberattack on the World Health Organization at the outset of the Coronavirus crisis in March 2020. That bespoke intelligence identified a domain and subdomain combination that allowed our colleague to (i) validate the data and establish that he had indeed identified an active threat to the WHO and (ii) communicate that data to trusted parties, including law enforcement, enabling defensive countermeasures to be put in place. This was a bright line example of how counsel can play a role in the critical day-to-day functions of security operations.


Disruptive Innovation: The emerging sectors applying digital technologies

Property technology, or Proptech, has been disrupting the real estate space, allowing for the buying, renting and selling of properties online. Like other industries on this list, Proptech companies have been vital for maintaining operations during lockdown as company headquarters closed their doors. Landlords and homeowners can securely list properties on an online market via a website or app, and in more recent times have been able to upload virtual viewings. Meanwhile, data analytics and algorithms powered by AI have allowed users to see what properties would be best for them. Additionally, platforms such as PlanetRent offer centralised portals for hosting relevant documents and details in one place. Proptech, and indeed real estate, also encompasses offices, and companies in this area of the sector have been in high demand due to the need for organisations to move to smaller spaces, or leave the office altogether. Here, users can view office listings and make transactions online.


Flexible return-to-office policies are hammering employee experiences

The latest report observed that just over a third of knowledge workers (34%) have reverted to working from the office five days a week, the greatest share since surveying began in June 2020. But as this was happening, employee experience scores were plummeting for knowledge workers asked to return to the office full-time and for those who do not have the flexibility to set their own work schedules. This includes 28% worse scores on work-related stress and anxiety and 17% worse scores on work-life balance compared with the previous quarter. Moreover, the study warned there were signs that employers will pay a price for this discontent: workers who say they are unsatisfied with their current level of flexibility – both in where and when they work – were now three times as likely to look for a new job in the coming year. The data showed that non-executives were facing far more strain during the return-to-office era than leaders in the C-suite, further widening an existing executive-employee disconnect on key job satisfaction measures revealed in October 2021.


IoT Is a Breakthrough for Automation Systems, But What About Security?

Since IoT and smart devices collect heaps of consumer data stored in the cloud and are managed in real time, leveraging traditional security tools and practices to ensure security doesn’t make any sense. As far as the automation industry is concerned, providing real-time access to applications and devices and ensuring robust security without human intervention becomes an uphill battle. Moreover, cybercriminals are always looking for a loophole in the networks that provide frequent access to resources, devices, and applications that they can exploit to sneak into a network. Hence, stringent security mechanisms become crucial for businesses leveraging IoT to automate their processes. Here’s where a centralized data infrastructure comes in: it seamlessly routes all IoT devices through an API that offers real-time insight into machine-to-machine access and user access. Through zero-trust principles and machine-to-machine access management, businesses can deliver a seamless, secure, and monitored experience to their customers without compromising their identities and personal information.


Interim CIOs Favored as Organizations Seek Digitalization Push

This is happening because an overall talent shortage coincides with growth in the on-demand talent pool, coupled with more demand for the CIO role because of digital transformation. “Companies need more from the CIO; it’s harder to get this talent, they need to move faster, and there's fantastic talent that can work remotely,” he explained. Neil Price, head of practice, CIO and executive technology leadership for Harvey Nash Group, pointed out the concept of an interim CIO is not new. However, it is becoming ever more attractive due to the volume of technology transformation that businesses are looking to execute to keep pace in the post-pandemic digital environment. “They need accelerated change, and for this, an interim CIO who comes in with a very specific brief and a clear deadline or end-date can bring the concentrated focus and impetus needed,” he said. “It’s easier to deliver change with a fresh set of eyes, and this is something an interim CIO offers.” He added the concept of an interim CIO can apply equally across all types of business, and any organization that wants to change at pace can see the benefits, whether it’s a small or medium-sized business or a large enterprise.


Artificial Intelligence (AI) strategy: 4 priorities for CIOs

One of the most critical priorities is identifying high-impact areas with opportunities to embed AI-based, real-time decisions in business processes. The ability to process contextual information in real time to make on-the-fly decisions is a powerful way to differentiate products, services, and experiences in the crowded marketplace. For example, insurance firms can automate claims processing for real-time approvals based on pictures and videos provided by the claimant right from the place and time of the incident. Lenders can analyze risks in real time based on collateral and background information to offer on-the-spot loan approvals. Organizations can personalize and customize products and services across a broad array of use cases through the judicious injection of AI in their business processes. ... Since AI engineering differs from “traditional” software engineering, CIOs must establish a strategy to institutionalize AI and ML methodologies. Many enterprises have found that the most effective way to do this is to establish a robust platform supported by a governance model.


Securely Scaling the Myriad APIs in Real-World Backend Platforms

Most real-world software platforms will also interact with external APIs from business partners or third-party providers. A good authorization server and an understanding of OAuth standards will also improve your capabilities for this type of “federation.” One interesting use case is shown below, where a partner user is authorized to sign in to the company’s app. The partner authorization server could then act as an identity provider to authenticate the user according to the partner’s security policy and with familiar credentials. An embedded token from the business partner could then be used by the company’s APIs to call partner APIs. ... JWT libraries will allow APIs to use a clock skew when validating access tokens, and it is possible to configure this slightly higher for downstream APIs. It is not recommended to rely on expiry times, however, since there can be multiple reasons for a JWT to fail validation. In some setups, this might be caused by an infrastructure event such as a load balancing failover. Instead, it is recommended to implement standard expiry handling. In this case, retrying from the client is often the most resilient option.
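The expiry-handling point can be sketched with the standard library alone. Signature verification is deliberately omitted below to keep the sketch short; a real API must verify the token's signature with a proper JWT library before trusting any claim:

```python
import base64
import json
import time

def decode_claims(token):
    # A JWT is header.payload.signature; the payload is base64url-encoded JSON.
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

def is_expired(claims, leeway_seconds=60):
    # Allow a small clock skew when checking exp, as described above;
    # a downstream API might configure a slightly larger leeway.
    return time.time() > claims["exp"] + leeway_seconds

# Demo with a fake token whose exp is epoch 0 (long past).
demo = "h." + base64.urlsafe_b64encode(b'{"exp": 0}').decode().rstrip("=") + ".s"
print(is_expired(decode_claims(demo)))  # True
```

A token whose exp falls inside the leeway window is still accepted, which absorbs small clock differences between the authorization server and the API without masking genuine validation failures.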


AWS Serverless Lambda Resiliency: Part 1

This is critical, as serverless Lambda functions are charged based on memory limits (which determine the CPU allocation) along with the duration for which the functions run. Without appropriate resiliency, our Lambda services could execute for longer than required, could overwhelm backend services that are unavailable or in a degraded state, and the clients that invoke our Lambda functions would not get an immediate fallback response. ... AWS Lambda Serverless capabilities themselves have multiple use cases when invoked synchronously or asynchronously, and hence the context of how they can be made resilient can differ between the different invocations. The approaches change based on whether the solution is deployed in a single region vs. multiple regions. There is also a dependency on where the provider service is deployed, e.g., on AWS (same or another region) or outside AWS. We will identify ways to ensure that warm start Lambdas are not carrying forward issues that happened during cold start initialization. The overall objective is to reduce Lambda functions' execution time/memory consumption by optimizing them before deployment.
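One of the patterns implied above — bounding how long a function waits on a degraded backend and returning an immediate fallback instead of billing for the outage — can be sketched as follows. The wrapper, names, and timeout value are illustrative, not from the article's series:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_fallback(call_backend, fallback, timeout_seconds=2.0):
    # Cap the time spent waiting so the Lambda is not billed for the
    # backend's outage, and give the caller an immediate degraded response.
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(call_backend).result(timeout=timeout_seconds)
    except FutureTimeout:
        return fallback
    finally:
        pool.shutdown(wait=False)

print(call_with_fallback(lambda: {"status": "ok"}, {"status": "degraded"}))
```

In practice this is usually paired with a circuit breaker, so that after repeated timeouts the backend is skipped entirely rather than probed on every invocation.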


Cross-platform UIs ‘go live’ with .NET MAUI

You can best think of MAUI as a way to unify the various platform-specific .NET APIs so that C# and XAML code can be written once and run everywhere, with the option of providing platform-specific code to avoid a lowest common denominator approach. MAUI sits above both native code and the common base-class libraries. Your code calls MAUI APIs, which then call the requisite platform APIs. If you prefer to have native-specific features, you can call platform APIs directly if they don’t have MAUI coverage. This approach gives you a base set of common controls, much like those used by Xamarin Forms, with a layout engine that allows UI code to scale between different device form factors and screen sizes. It’s important to be aware of the capabilities of your target devices and, at the same time, come up with UI designs that can support the shift between landscape PC and Mac experiences and portrait mobile screens. Much of MAUI is the familiar XAML design experience, with a page description and code-behind to manage interactions with the rest of your application, as well as a canvas for displaying and interacting with custom graphic elements.



Quote for the day:

"The great leaders are like best conductors. They reach beyond the notes to reach the magic in the players." -- Blaine Lee

Daily Tech Digest - April 19, 2022

So you're thinking about migrating to Linux? Here's what you need to know

The Linux desktop is so easy. It really is. Developers and designers of most distributions have gone out of their way to ensure the desktop operating system is easy to use. During those early years of using Linux, the command line was an absolute necessity. Today? Not so much. In fact, Linux has become so easy and user-friendly, that you could go your entire career on the desktop and never touch the terminal window. That's right, Linux of today is all about the GUI and the GUIs are good. If you can use macOS or Windows, you can use Linux. It doesn't matter how skilled you are with a computer, Linux is a viable option. In fact, I'd go so far as to say that the less skill you have with a computer the better off you are with Linux. Why? Linux is far less "breakable" than Windows. You really need to know what you're doing to break a Linux system. One very quick way to start an argument within the Linux community is to say Linux isn't just a kernel. In a similar vein, a very quick way to confuse a new user is to tell them that Linux is only the kernel. ... Yes, Linux uses the Linux kernel. All operating systems have a kernel, but you don't ever hear Windows or macOS users talk about which kernel they use.


Purpose is a two-way street

There’s a broader redefinition of purpose that’s underway both for organizations and individuals. Today, people don’t have just one single career in a lifetime but five or six—and their goals and purpose vary at each stage. At the same time, organizations can’t address or engage with the broad range of stakeholders they deal with through just one single purpose. In combination, these shifts are ushering in the concept of purpose as a “cluster” of goals and experiences, with different aspects resonating with different stakeholders at different times. The same cluster concept holds true for career paths. It is vital to expand the conversation about the varied, unique options people have to fulfill their goals. Companies must strive to make those options more transparent, more individualized, and more flexible, and less linear. For today’s employees, the point of a career path is not necessarily to climb a ladder with a particular end-state in mind but to gain experience and pursue the individual’s purpose—a purpose that may shift and evolve over time. To that end, it may make sense for organizations to create paths that allow employees to move within and across, and even outside, an organization—not just up—to achieve their goals.


How algorithmic automation could manage workers ethically

Mewies says bias in automated systems generates significant risks for employers that use them to select people for jobs or promotion, because it may contravene anti-discrimination law. For projects involving systemic or potentially harmful processing of personal data, organisations have to carry out a privacy impact assessment, she says. “You have to satisfy yourself that where you were using algorithms and artificial intelligence in that way, there was going to be no adverse impact on individuals.” But even when not required, undertaking a privacy impact assessment is a good idea, says Mewies, adding: “If there was any follow-up criticism of how a technology had been deployed, you would have some evidence that you had taken steps to ensure transparency and fairness.” ... Antony Heljula, innovation director at Chesterfield-based data science consultancy Peak Indicators, says data models can exclude sensitive attributes such as race, but this is far from foolproof, as Amazon showed a few years ago when it built an AI CV-rating system trained on a decade of applications, only to find that it discriminated against women.


The changing role of the CCO: Champion of innovation and business continuity

The best CCOs partner with the business to really understand how to place gates and controls that mitigate risk, while still allowing the business to operate at maximum efficiency. One area of the business that is particularly valuable is the IT department, which can help CCOs to maintain and provide systematic proof of both adherence to internal policies and the external laws, guidelines or regulations imposed upon the company. By having a dedicated IT resource, CCOs do not have to wait for the next programme increment (PI), sprint planning or IT resourcing availability. Instead, they can be agile and proactive when it comes to meeting business growth and revenue objectives. Technical resourcing can be utilised for project governance, systems review, data science, AML and operational analytics, as well as support audit / reporting with internal / external stakeholders, investors, regulators, creditors and partners. Ultimately this partnership between IT and CCOs will allow a business to make data-driven decisions that meet compliance as well corporate growth mandates.


IT Admins Need a Vacation

An unhappy sysadmin can breed apathy, and an apathetic attitude is especially problematic when sysadmins are responsible for cybersecurity. Even in organizations where cybersecurity and IT are separate, sysadmins affect cybersecurity in some way, whether it’s through patching, performing data backups, or reviewing logs. This problem is industry-wide, and it will take more than just one person to solve it, but I’m in a unique position to talk about it. I’ve held sysadmin roles, and I’m the co-founder and CTO of a threat detection and response company in which I oversee technical operations. One of my top priorities is building solutions that won’t tip over and require significant on-call support. The tendency to paper over a problem with human effort 24/7 is a tragedy in the IT space and should be solved with technology wherever possible. As someone who manages employees that are on-call and is still on-call, I need to be in tune with the mental health of my team members and support them to prevent burnout. I need to advocate for my employees to be compensated generously and appreciate and reward them for a job well done.


The steady march of general-purpose databases

Brian Goetz has a funny way of explaining the phenomenon, called Goetz’s Law: “Every declarative language slowly slides towards being a terrible general-purpose language.” Perhaps a more useful explanation comes from Stephen Kell who argues that “the endurance of C is down to its extreme openness to interaction with other systems via foreign memory, FFI, dynamic linking, etc.” In other words, C endures because it takes on more functionality, allowing developers to use it for more tasks. That’s good, but I like Timothy Wolodzko’s explanation even more: “As an industry, we're biased toward general-purpose tools [because it’s] easier to hire devs, they are already widely adopted (because being general purpose), often have better documentation, are better maintained, and can be expected to live longer.” Some of this merely describes the results of network effects, but how general purpose enables those network effects is the more interesting observation. Similarly, one commenter on Bernhardsson’s post suggests, “It's not about general versus specialized” but rather “about what tool has the ability to evolve.”


Open-Source NLP Is A Gift From God For Tech Start-ups

Of late, however, open research efforts like EleutherAI have lowered the barriers to entry. A grassroots collective of AI researchers, EleutherAI aims to eventually deliver the code and datasets needed to run a model comparable (though not identical) to GPT-3. The group has already released a dataset called The Pile, designed to train large language models to complete text, write code, and more. (Incidentally, Megatron 530B was trained along the lines of The Pile.) And in June, EleutherAI made available under the Apache 2.0 license GPT-Neo and its successor, GPT-J, a language model that performs nearly on par with an equivalently sized GPT-3 model. One of the startups serving EleutherAI’s models as a service is NLP Cloud, which was founded a year ago by Julien Salinas, a former software engineer at Hunter.io and the founder of money-lending service StudyLink.fr.


SQL and Complex Queries Are Needed for Real-Time Analytics

While taking the NoSQL road is possible, it’s cumbersome and slow. Take an individual applying for a mortgage. To analyze their creditworthiness, you would create a data application that crunches data, such as the person’s credit history, outstanding loans and repayment history. To do so, you would need to combine several tables of data, some of which might be normalized, some of which are not. You might also analyze current and historical mortgage rates to determine what rate to offer. With SQL, you could simply join tables of credit histories and loan payments together and aggregate large-scale historic data sets, such as daily mortgage rates. However, using something like Python or Java to manually recreate the joins and aggregations would multiply the lines of code in your application by tens or even a hundred compared to SQL. More application code not only takes more time to create, but it almost always results in slower queries. Without access to a SQL-based query optimizer, accelerating queries is difficult and time-consuming because there is no demarcation between the business logic in the application and the query-based data access paths used by the application.
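
The difference in effort can be sketched with Python’s built-in sqlite3 module. The table and column names below are illustrative assumptions, not taken from the article:

```python
import sqlite3

# Toy schema for a mortgage-style example; names are illustrative.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE loans (loan_id INTEGER, applicant_id INTEGER, principal REAL);
CREATE TABLE payments (loan_id INTEGER, amount REAL, on_time INTEGER);
INSERT INTO loans VALUES (1, 42, 200000), (2, 42, 15000);
INSERT INTO payments VALUES (1, 1200, 1), (1, 1200, 1), (2, 300, 0), (2, 300, 1);
""")

# One declarative statement joins the tables and aggregates; the query
# optimizer, not application code, picks the access path.
row = conn.execute("""
    SELECT l.applicant_id,
           SUM(p.amount) AS total_paid,
           AVG(p.on_time) AS on_time_rate
    FROM loans l
    JOIN payments p ON p.loan_id = l.loan_id
    WHERE l.applicant_id = 42
    GROUP BY l.applicant_id
""").fetchone()
print(row)  # (42, 3000.0, 0.75)
```

The equivalent hand-written code would loop over both tables, match loan IDs, and accumulate the totals itself, with no optimizer to choose an efficient plan.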


Lack of expertise hurting UK government’s cyber preparedness

In France, security pros tended to find tender and bidding processes more of an issue, but also cited a lack of trusted partners, budget, and ignorance of cyber among organisational leadership. German respondents also faced problems with tendering, along with many of the same problems as the British and French. From a technological perspective, UK-based respondents cited endpoint detection and response (EDR), extended detection and response (XDR), and cloud security modernisation as the most mature defensive solutions, with 37% saying they were “fully deployed” in this area. Zero trust trailed with 32%, and multi-factor authentication (MFA) was cited by 31% – Brits also tended to think MFA was more difficult than average to implement. The French, on the other hand, are doing much better on MFA, with 47% of respondents claiming full deployment, 35% saying they had fully deployed EDR-XDR, and 33% and 30% saying they had fully implemented cloud security modernisation and zero trust respectively. The Germans, in contrast, tended to be better on cloud security modernisation, which 40% claimed to have fully implemented, followed by zero trust at 32%, MFA at 30% and EDR-XDR at 27%.


Scrum Master Anti-Patterns

The reasons Scrum Masters violate the spirit of the Scrum Guide are multi-faceted. They range from ill-suited personal traits to pursuing their own agendas to frustration with the Scrum team. Some often-observed reasons are: Ignorance or laziness: One size of Scrum fits every team. Your Scrum Master learned the trade in a specific context and is now rolling out precisely this pattern in whatever organization they are active in, no matter the context. Why go through the hassle of teaching, coaching, and mentoring if you can shoehorn the “right way” directly into the Scrum team?; Lack of patience: Patience is a critical resource that a successful Scrum Master needs to field in abundance. But, of course, there is no fun in readdressing the same issue several times, perhaps rephrasing it, if the solution is so obvious—from the Scrum Master’s perspective. So, why not tell them how to do it ‘right’ all the time, thus becoming more efficient? Too bad that Scrum cannot be pushed but needs to be pulled—that’s the essence of self-management; Dogmatism: Some Scrum Masters believe in applying the Scrum Guide literally, which will unavoidably cause friction, as Scrum is a framework, not a methodology.



Quote for the day:

"No organization should be allowed near disaster unless they are willing to cooperate with some level of established leadership." -- Irwin Redlener

Daily Tech Digest - April 18, 2022

Which Computational Universe Do We Live In?

Unfortunately, we don’t know whether secure cryptography truly exists. Over millennia, people have created ciphers that seemed unbreakable right until they were broken. Today, our internet transactions and state secrets are guarded by encryption methods that seem secure but could conceivably fail at any moment. To create a truly secure (and permanent) encryption method, we need a computational problem that’s hard enough to create a provably insurmountable barrier for adversaries. We know of many computational problems that seem hard, but maybe we just haven’t been clever enough to solve them. Or maybe some of them are hard, but their hardness isn’t of a kind that lends itself to secure encryption. Fundamentally, cryptographers wonder: Is there enough hardness in the universe to make cryptography possible? ... Most cryptographers, Ishai said, believe that at least some cryptography does exist, so we likely live in Cryptomania or Minicrypt. But they don’t expect a proof of this anytime soon. Such a proof would require ruling out the other three worlds — and ruling out Algorithmica alone already requires solving the “P versus NP” problem, which computer scientists have struggled with for decades.


The AI in a jar

Chomsky sparked a reorientation of psychology toward the brain dubbed the cognitive revolution. The revolution produced modern cognitive science, and functionalism became the new dominant theory of the mind. Functionalism views intelligence (i.e., mental phenomenon) as the brain’s functional organization where individuated functions like language and vision are understood by their causal roles. Unlike behaviorism, functionalism focuses on what the brain does and where brain function happens. However, functionalism is not interested in how something works or if it is made of the same material. It doesn’t care if the thing that thinks is a brain or if that brain has a body. If it functions like intelligence, it is intelligent like anything that tells time is a clock. It doesn’t matter what the clock is made of as long as it keeps time. ... Unfortunately, functions do not think. They are aspects of thought. The issue with functionalism—aside from the reductionism that results from treating thinking as a collection of functions (and humans as brains)—is that it ignores thinking. 


Microsoft’s Newest AI technology, “PeopleLens,” is Helping Blind People See

PeopleLens was developed over two years by a team of Microsoft engineers and computer scientists. The aim was to create a machine learning system to help blind people navigate their social surroundings by identifying people and objects in photos. The team used a dataset of images annotated with labels indicating the presence of people and objects. They then used deep learning algorithms to train a computer vision model that could identify these labels in new images. ... The system uses computer vision algorithms to help the blind person understand their social surroundings. PeopleLens first identifies people in a scene and then provides information about them, such as their name and position. The PeopleLens platform consists of a wearable device and a cloud-based service. The device captures images of the surrounding environment and sends them to the cloud-based service, where they are processed by the machine learning algorithms. This information is then used to generate descriptions of the surrounding environment, which are sent back to the wearable device.


Sustaining Fast Flow with Socio-Technical Thinking

If we shape the domain boundaries right, groups of related business concepts that change together will belong together and there will be fewer social and technical dependencies. Shaping good domain boundaries isn’t always a trivial task. When you stay high-level, you can easily fool yourself into thinking something is a sensible domain like the “customer domain” (this is usually something which connects to everything about the customer and results in a very tightly coupled system). I recommend using techniques like Event Storming and Value Stream Mapping to really get into the details of how your business works before attempting to define domain boundaries. Event Storming is a technique where you map out user journeys and business processes using sticky-notes. There aren’t too many rules, it’s a lo-fi technique which increases participation due to a very small learning curve. There is one rule though: processes are mapped out using domain events which represent something happening in the domain and are phrased in past tense, for example, ETA Calculated, Order Placed, Claim Rejected, and so on.
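
The past-tense event convention can be sketched in code. The event names come from the article’s examples; the fields are illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime

# Domain events are named in past tense: facts that have already happened.
@dataclass(frozen=True)
class OrderPlaced:
    order_id: str
    customer_id: str

@dataclass(frozen=True)
class EtaCalculated:
    order_id: str
    eta: datetime

@dataclass(frozen=True)
class ClaimRejected:
    claim_id: str
    reason: str

# A business process on the Event Storming wall then reads as a timeline:
process = [
    OrderPlaced("o-1", "c-9"),
    EtaCalculated("o-1", datetime(2022, 4, 25, 12, 0)),
]
```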


How to minimize new technical debt

John Kodumal, CTO and cofounder of LaunchDarkly, says, “Technical debt is inevitable in software development, but you can combat it by being proactive: establishing policy, convention, and processes to amortize the cost of reducing debt over time. This is much healthier than stopping other work and trying to dig out from a mountain of debt.” Kodumal recommends several practices, such as “establishing an update policy and cadence for third-party dependencies, using workflows and automation to manage the life cycle of feature flags, and establishing service-level objectives.” ... “The first and most important is proper planning and estimating. The second is to standardize procedures that limit time spent organizing and [allow] more time executing.” Most development teams want more time to plan, but it may not be obvious to product owners, development managers, or executives how planning helps reduce and minimize technical debt. When developers have time to plan, they often discuss architecture and implementation, and the discussions tend to get into technical details. Product owners and business stakeholders may not understand or be interested in these technical discussions.
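
One way to automate the feature-flag life cycle Kodumal mentions is to attach an owner and expiry date to every flag so a scheduled job can surface stale entries for removal. This is a minimal sketch under that assumption, not LaunchDarkly’s actual mechanism:

```python
from datetime import date

# Every flag records an owner and an expiry date (names are hypothetical).
FLAGS = {
    "new-checkout": {"enabled": True, "owner": "payments", "expires": date(2022, 6, 1)},
    "legacy-banner": {"enabled": False, "owner": "web", "expires": date(2022, 3, 1)},
}

def stale_flags(today):
    """Return the names of flags past their expiry date."""
    return [name for name, flag in FLAGS.items() if flag["expires"] < today]

print(stale_flags(date(2022, 4, 18)))  # ['legacy-banner']
```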


12 examples of artificial intelligence in everyday life

Today, many larger banks give you the option of depositing checks through your smartphone. Instead of actually walking to a bank, you can do it with just a couple of taps. Besides the obvious safeguards when it comes to accessing your bank account through your phone, a check also requires your signature. Now banks use AI and machine learning software to read your handwriting, compare it with the signature you gave to the bank before, and safely use it to approve a check. In general, machine learning and AI tech speed up most operations done by software in a bank. This all leads to more efficient execution of tasks, decreasing wait times and costs. ... And while we are on the subject of banking, let's talk about fraud for a little bit. A bank processes a huge number of transactions every day. Tracking and analyzing all of that is impossible for a regular human being. Furthermore, what fraudulent transactions look like changes from day to day. With AI and machine learning algorithms, you can have thousands of transactions analyzed in a second. Furthermore, you can also have them learn, figure out what problematic transactions can look like, and prepare themselves for future issues.
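
A toy sketch of the idea: flag a transaction that falls far outside an account’s usual spending pattern. Real systems learn from many features with trained models; a single z-score over amounts is only for illustration:

```python
from statistics import mean, stdev

history = [42.0, 38.5, 51.0, 45.2, 40.1, 47.9, 44.3, 39.8]  # past spend

def is_suspicious(amount, history, threshold=3.0):
    """Flag an amount that sits far outside the account's usual pattern."""
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > threshold * sigma

print(is_suspicious(43.0, history))    # False: close to typical spend
print(is_suspicious(2500.0, history))  # True: far outside the pattern
```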


Is The Modern Data Warehouse Broken?

The first problem is the disconnect, really a chasm, that it creates between the data consumer (analysts/data scientists) and the data engineer. A project manager and a data engineer will build pipelines upstream from the analyst, who will be tasked with answering certain business questions from internal stakeholders. Inevitably, the analyst will discover that the data will not answer all of their questions and that the project manager and data engineer have moved on. The second challenge arises when the analyst’s response is to go directly into the warehouse and write a brittle 600-line SQL query to get their answer. Or, a data scientist might find the only way they can build their model is to extract data from production tables, which operate as the implementation details of services. The data in production tables is not intended for analytics or machine learning. In fact, service engineers often explicitly state NOT to take critical dependencies on this data, since it could change at any time. However, our data scientist needs to do their job, so they do it anyway, and when the table is modified everything breaks downstream.


An open invitation for women to join the Web3 movement

It’s important to understand some of the reasons why crypto has received the “boys club” reputation so we can smash it. At its core, I believe it is because crypto was billed as a risky investment from the start. Women, who are naturally more risk averse, shied away from the initial wave. Today, the gap between men and women in crypto aligns with the legacy of traditional investment verticals skewing toward men. ... In order for the movement to grow and gain legitimacy, we need everyone involved. I’d like to challenge men involved in Web3 to think of a woman they can invite to their next meeting. And, I’d like to challenge women to ask questions and see this opportunity as a way to align their wealth with men. This is a moment in which you can change the course of female wealth not just today, but well into the future. There are many women now joining the movement and inviting others in, as well. It’s starting. And, I’m so pleased to be at the forefront of the shift. Web3 is making its debut in traditionally female venues now. Look no further than Shopify: the online sales platform, which reports that 52% of its customers are women, is creating a marketplace for NFT sales.


It’s not enough for CEOs to empathize with employees

CEOs who live up to Doctorow’s caricature by shutting down their emotions and coldly making decisions that harm people also incur a personal cost. Hougaard adds: “You turn into someone who you probably won’t like.” Often, empathy is touted as the antidote to mean business. But Hougaard thinks that an approach to leadership based solely on empathy has its own adverse side effects. “Leaders can literally take on the suffering of the people that they are inflicting suffering on and experience empathy burnout,” he explained. “Many CEOs tell me that they make multibillion-dollar decisions and sleep fine at night. But when they have to give tough feedback to employees or restructure the workforce, they don’t sleep for weeks.” They’re missing sleep because they don’t realize that empathy is only the first step in dealing with emotionally fraught people issues. “The mantra here is: connect with empathy but lead with compassion,” said Hougaard. “Empathy is nice for people, because they’re not alone anymore, but it’s not really helping them to get out of their suffering. Compassion is an intention. ...”


Is your middle management freezing progress? 4 ways to empower change

It is important to demystify what organizational culture means and how it impacts business outcomes, customer success, and employee satisfaction. It doesn’t have to be a top-down narrative that’s adopted universally: culture can be created at a team level. Managers have a huge influence on the subculture of their part of the organization. Managers can proactively opt to create a positive organizational culture. Adopting an open leadership mindset combined with open management practices evidently impacts key outcomes like customer satisfaction, employee engagement, innovation, and profitability. For an employee, the organization begins with their manager. Managers need to ask “What is the experience I am creating for my team?” Ask basic questions like “when do we want to meet?” and “how do we want to organize ourselves?” If there are bigger decisions to be made, consider how teams could be involved. Now, more than ever, employees are looking for empathy from their executives, to be consulted on their future, not just to have a meaningful say in the decisions that affect them, but what’s being decided on in the first place.



Quote for the day:

"Open Leadership: the act of engaging others to influence and execute a coordinated and harmonious conclusion." -- Dan Pontefract

Daily Tech Digest - April 17, 2022

What is the 9-box talent review? A matrix for identifying top performers

The first step in using a 9-box grid is to assess an employee’s performance, which is typically done by evaluating performance reviews or using talent management systems. Managers are tasked with ranking employees based on performance and behavior, and then those rankings are passed on to upper management and leaders who can then identify and rank employees for their leadership potential. Employees can rank as low, medium, or high performance depending on how well they meet the requirements of their role. Low-performing employees are those who do not complete job requirements and regularly fail to meet assigned KPIs or other benchmarks. Employees who fall into the medium category are those who meet expectations part of the time and complete job requirements half of the time. High-performing employees reach all their necessary benchmarks and job duties, often surpassing them. Despite the fact that the 9-box grid puts an emphasis on the highest and lowest performers, it’s not designed to pit workers against one another or to make them feel as if they’re being ranked.
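
The grid placement itself is simple to express in code, assuming each axis has already been rated low, medium, or high as described above. The cell labels are illustrative, since organizations name the nine boxes differently:

```python
LEVELS = {"low": 0, "medium": 1, "high": 2}

# Illustrative names for three of the nine cells.
LABELS = {
    (2, 2): "star",
    (2, 0): "high potential, low performance",
    (0, 0): "underperformer",
}

def nine_box(performance, potential):
    """Return (potential row, performance column) in the 3x3 grid."""
    return (LEVELS[potential], LEVELS[performance])

box = nine_box("high", "high")
print(box, LABELS.get(box, "unlabeled"))  # (2, 2) star
```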


Approach cloud architecture from the outside in

Outside in moves in the opposite direction. You begin with the specific business requirements, such as what the business use cases are for specific solutions or, more likely, many solutions or applications. Then you move inward to infrastructure and other technologies specifically chosen to support the many solutions or applications required, such as databases, storage, compute, and other enabling technologies. Most cloud architects move from the inside out. They pick their infrastructure before truly understanding the solution’s specific purpose. They partner with a cloud provider or database vendor and pick other infrastructure-related solutions that they assume will meet their specific business solutions requirements. In other words, they pick a solution in the wide before they pick a solution in the narrow. This is how enterprises get solutions that function but are grossly underoptimized or, more often, have many surprise issues such as the ones discussed earlier. Discovering these issues requires a great deal of work and typically requires the team to remove and replace technology solutions on the fly.


The future of the internet: Inside the race for Web3’s infrastructure

The fastest way to provide reliable infrastructure to power DApp ecosystems is for centralized companies to set up a fleet of blockchain nodes, commonly housed in Amazon Web Services (AWS) data centers, and allow developers to access it from anywhere for a subscription. That is exactly what a few players in the space did, but it came at the price of centralization. This is a major issue for the Web3 economy, as it leaves the ecosystem vulnerable to attacks and at the mercy of a few powerful players. ... Decentralization is a key tenet of the Web3 economy, and centralized blockchain infrastructure threatens to undermine it. For instance, Solana has suffered multiple outages due to a lack of sufficient, decentralized nodes that could handle spiking traffic. This is a common problem for blockchain protocols that are trying to scale. ... Even more importantly, decentralized infrastructure competition results in greater decentralization of the Web3 economy. This is a good thing, as it makes the economy more resilient against attacks and censorship.


Enterprise architecture is based on business strategy, is it not?

Interestingly, many attempts to develop actionable plans out of business strategy to enable it are precluded, first of all, by the symbolic and elusive nature of strategy itself. For example, a rather common industry situation with business strategy can be vividly illustrated by the following jocular quote of Jeanne Ross, a former principal research scientist at MIT Sloan Center for Information Systems Research (CISR): ‘I remember IBM saying, “Our strategy is, we’re gonna raise share price to $11 per share”, and I thought, “Who the heck is gonna enable that strategy?”’. In fact, decades of research on information systems planning have long identified a broad spectrum of problems associated with business strategy as a basis for acting. Strategy can be vague, ambiguous and interpreted differently by different people (e.g. ‘become number one’ or ‘provide best services’). Strategy can be purely aspirational and consist of mere motivational slogans. Strategy can comprise various objectives and indicators offering no actionable hints, especially for IT. Strategy can be market sensitive, deliberately obscure and surrounded by secrecy.


6 Best Data Governance Practices

People, procedures, and technology are all critical aspects of data management. Keep all three elements in mind when developing and executing your data plan. However, you don’t have to improve all three areas simultaneously. Start with the essential components and work your way up to the complete picture. Begin with people, progress to procedure, and conclude with technology. Each component must build on top of the preceding ones for the whole data governance plan to be well-rounded. The process won’t work without the right people. If the people and procedures in your company aren’t managing your data as you intended, no cutting-edge technology can suddenly repair it. Before developing a process, search for and hire the right people. ... It is critical to track progress and demonstrate the effectiveness of your data governance strategy, just as it would be with any other change. Once you’ve acquired executive buy-in for your business case, you’ll need evidence to support each stage of your transition. Plan ahead to establish metrics before implementing data policies so that you can build a baseline from your current data management practices.


Data quality can make or break efforts to bring artificial intelligence to IT operations

The success of AIOps is inexorably tied to "data, data, data, and how well you can handle and process the data," Krishnamsetty agrees. One of the most vexing issues is data access and acquisition, he points out. "You want to pull data from your AWS environment, or your application performance monitoring tools, or your log analytics tool. But all this data is in different formats." RDA addresses the data challenges associated with AIOps, Krishnamsetty continues. "If you don't have the proper data, it's garbage-in, garbage-out. However powerful your machine learning algorithms are, if your data quality is poor, you are not going to get good insights and analytics." For example, "if you look at any raw alerts coming from any of your management or monitoring systems, you will know how sparse the data is," he illustrates. "A human can't make a quick decision on it unless it is automatically enriched. The data is incomplete. What application, what infrastructure, and so forth." RDA also helps address the skills gap, as the skills for assuring the quality of data fed into AI systems are in short supply, he continues.
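
The enrichment step Krishnamsetty describes amounts to joining a sparse raw alert with an asset inventory so it can be acted on. A minimal sketch, with hypothetical field names and inventory:

```python
# Asset inventory keyed by host; contents are hypothetical.
INVENTORY = {
    "host-17": {"application": "billing-api", "tier": "production", "team": "payments"},
}

def enrich(alert):
    """Merge inventory context into a raw alert, if the host is known."""
    return {**alert, **INVENTORY.get(alert.get("host"), {})}

raw = {"host": "host-17", "metric": "cpu", "value": 97}
print(enrich(raw)["application"])  # billing-api
```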


How Crypto Lending Platforms will revolutionize the Fintech Industry moving in 2022

The transformative role that crypto-based lending platforms can have cannot be overstated. They give each person the power to become their own bank. Not only can they borrow from others at rates and conditions more favorable than traditional financial institutions, but they can also borrow against their own assets. For example, one could deposit their crypto assets and take out a loan against their cryptocurrency. So, when it appreciates, they have an increased asset position, plus the ability to meet an urgent need for liquidity. Credit forms the backbone of any healthy economy, and access to that credit determines its success in the global markets. Credit helps businesses and individuals grow in the backdrop of a growing economy. It provides businesses much-needed capital to expand, maintain inventory, spend on research and development, and sustainably pay wages. Without easy access to credit, businesses are often placed under a glass ceiling that hampers their ability to grow. Thanks to the internet, the world has gone global much faster than air travel ever connected us.


Do You Need a Semantic Layer?

Most organizations don’t trust their data, leading to slow decisions or no decisions at all. In fact, according to the recent Chief Data Officer Survey, 72% of data and analytics leaders are heavily involved in or leading digital business initiatives, but they are uncertain how they can build a trusted data foundation to accelerate them. It’s not hard to see why a lack of trust in analytics outputs is so pervasive. Conflicting analytics outputs are all but assured when multiple business units, groups, business users, and data scientists prepare their analytics using their own business definitions and their own tools. A semantic layer can drive trust in data by empowering data self-service while ensuring the consistency, fidelity, and explainability of analytic outputs. With the fast pace of today’s business climate, waiting for a centralized data team to produce analytics for the business is a thing of the past. The self-service analytics revolution was born in response to the need for businesses to free themselves from the constraints of IT. 


Do CBDCs Need Blockchain? Growing Number of Central Banks Say No

It’s still too early to say that blockchain provides any definitive benefits, Dinesh Shah, the Bank of Canada’s director of FinTech research, told crypto industry news outlet The Block last week. Blockchain “is not a given but it’s still on our list of potentials,” when it comes to designing a CBDC, said Shah, who has expressed skepticism about the technology crypto is built on in the past. That is roughly where MIT’s researchers came down in a February test of technologies performed with the Federal Reserve Bank of Boston, which found that in a head-to-head test of a barebones CBDC design, a blockchain-based platform was far inferior. The blockchain-based platform was capable of only 10% of the scalability of a non-DLT system because of bottlenecks created by the need for a single and complete record of transactions in the order in which they were processed. Shah said that’s especially noteworthy because the Bank of Canada is collaborating with the Boston Fed and the Bank of England — also an MIT partner — on this research.


Test Case vs. Test Scenario: Key Differences to Note for Software Developers

It’s worth noting that test cases often form part of a test scenario. A test scenario is focused on an aspect of the project — for instance, "test the login function." Test cases are your means of checking whether that aspect works as intended — in this case, by detailing the steps to take. ... Because test scenarios usually have one simple goal, the means of getting to that goal is more flexible than in test cases (where the process is more specific). The test documents will reflect these differences. A test case document will have specific guidelines for every case: the test case name, pre-conditions, post-conditions, description, input data, test steps, expected output, actual output, results, and status fields will all be laid out in the case document. ... In contrast, a test scenario document is open to interpretation by the team. They should identify the most important goal of the project and then design tests around reaching that goal. Test scenarios allow for creativity on the part of the testers.
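
One test case under the scenario "test the login function" might be recorded with the fields listed above. All values here are illustrative:

```python
# A single test case document entry, using the fields from the text.
test_case = {
    "name": "Login succeeds with valid credentials",
    "description": "Verify a registered user can sign in",
    "pre_conditions": ["user account exists", "user is logged out"],
    "input_data": {"username": "alice", "password": "correct-horse"},
    "test_steps": ["open login page", "enter credentials", "submit form"],
    "expected_output": "user is redirected to the dashboard",
    "actual_output": None,  # filled in when the case is executed
    "post_conditions": ["session cookie is set"],
    "status": "not run",
}
```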



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell

Daily Tech Digest - April 16, 2022

The Challenge of Continuous Delivery in Distributed Environments

Most teams have insufficient insight into the current environment at each endpoint; therefore, failures take time to investigate, and often unique tweaks and fixes are needed to handle each change in the state of the distributed system. That’s why DevOps engineers are doing so much hand-coding. Engineers are finding they must stop the normal CI/CD flow, investigate what part of an endpoint infrastructure is not running, and then make manual tweaks to the software and deployment code to compensate for the change. Here’s the thing: there will always be changes to the system. Infrastructure environments never stay static, and therefore a lot of “continuous deployment” systems aren’t really continuous at all. Because DevOps engineers don’t always know the state of each endpoint environment in a distributed system, the CI/CD pipeline can’t possibly be adaptive enough. In the end, the process of ensuring continuous deployment in distributed environments can be extremely burdensome and complicated, slowing the pace of business innovation.


Confessions of a CTO

To truly ensure the organization’s stability, CTOs need to pay as much attention to the seemingly smaller tasks as they do the big transformational changes. This starts with a rigorous due-diligence process: understanding where the business is today and looking in depth for any weak spots. To do this, CTOs need to look towards the specialist solutions provided by the right vendor. Adoption of a configuration management tool can give CTOs oversight of the whole IT suite, identifying and tracking changes against a defined set of policies and flagging any deviations for rectification. Policies devised from the Center for Internet Security (CIS) guidelines give CTOs an established standard of security measures to work with, facilitating the visibility and control to make required changes and pursue a continuous improvement strategy by achieving best-practice configuration. For critical legacy applications that need to make a successful move to a newer operating system version, application compatibility packaging can allow them to be transplanted to an on-prem, hybrid or cloud system without the need for any code modifications.


Better and faster: Organizational agility for the public sector

Despite the promise agile methodologies hold for the public sector, certain characteristics can make government entities a difficult fit for the agile model. Government budgets tend to follow longer time horizons—often annual—than agile cadences; internal competition for funding between agencies for a fixed pool of funding can discourage collaboration across government; and because the returns on investments in change are often dispersed within the government and to the public, it can be difficult to motivate employees to work for an upside they cannot necessarily see or experience. The public sector’s hierarchical structure—and its accompanying culture and ways of working—can also make implementing agile methodologies, such as flat organizations and fast iterations, difficult. ... Agile operating models configure teams based on facilitating outcomes instead of on function and expertise. This orientation can boost productivity and engagement by limiting handoffs between functional silos and focusing a wider array of skills on a shared objective. 


Software Architecture: It Might Not Be What You Think It Is

Architecting modern software applications is a fundamentally explorative activity. Teams building today’s applications encounter new challenges every day: unprecedented technical challenges as well as providing customers with new ways of solving new and different problems. This continuous exploration means that the architecture can’t be determined up-front, based on past experiences; teams have to find new ways of satisfying quality requirements. ... Some decisions will, inevitably and unavoidably, create technical debt; for example, the decision to meet reliability goals by using a SQL database has some side effects on technical debt (see Figure 1). The now long-past “Y2K problem” stemmed from a conscious decision that developers made at the time: not storing century data as part of standard date representations reduced data storage, memory use, and processing time needs. The problem was that they didn’t expect the applications to last so long, long after those constraints became irrelevant.
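
That shortcut still has echoes today: a two-digit year is ambiguous, and any parser must guess the century. Python's %y directive, for example, follows the POSIX pivot rule:

```python
from datetime import datetime

# %y maps 69-99 to 1969-1999 and 00-68 to 2000-2068 (POSIX convention).
print(datetime.strptime("01/01/69", "%m/%d/%y").year)  # 1969
print(datetime.strptime("01/01/68", "%m/%d/%y").year)  # 2068
```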


Decentralizing the grid: Operators test blockchain solutions

Digital identity enables greater cybersecurity and data ownership. While this use case speaks volumes about how the future of the energy market may take shape, the application of DIDs ultimately enables better cybersecurity for grid operators. Morris explained that, in contrast with traditional Web1 or Web2 approaches, most grid operators use a centralized database into which they manually enter information about the sensors and hardware located on utilities within their network. Yet such an approach could allow anyone who compromises that database to harvest user data and even gain control of those sensors. "This level of centralization is a cybersecurity risk, which is why our solution with Stedin also proves to be a cybersecurity application," Morris remarked. Jongepier added that Stedin was indeed looking to raise the bar on its cybersecurity. "Blockchain is effective for this because it provides the ground rules for utilizing decentralized identifiers for Stedin's IoT assets, serving as a solution for raising the bar on security."
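
The shift from a central device database to device-held identity can be sketched in miniature. This is a toy illustration, not Stedin's actual system: each sensor holds its own secret and a derived decentralized identifier (DID), and readings are accepted only when their signatures verify. Real DID methods use public-key cryptography; stdlib HMAC stands in here to keep the sketch self-contained.

```python
import hashlib
import hmac
import secrets

class Sensor:
    """Toy IoT sensor that controls its own identity key."""
    def __init__(self) -> None:
        self._key = secrets.token_bytes(32)  # ideally never leaves the device
        self.did = "did:example:" + hashlib.sha256(self._key).hexdigest()[:16]

    def signed_reading(self, value: bytes) -> tuple[bytes, bytes]:
        """Emit a reading plus a signature bound to this sensor's key."""
        return value, hmac.new(self._key, value, "sha256").digest()

    def verify(self, value: bytes, sig: bytes) -> bool:
        """Check that a reading really came from this sensor, untampered."""
        return hmac.compare_digest(sig, hmac.new(self._key, value, "sha256").digest())

s = Sensor()
reading, sig = s.signed_reading(b"voltage=230")
assert s.verify(reading, sig)          # genuine reading accepted
assert not s.verify(b"voltage=999", sig)  # forged reading rejected
```

The point of the pattern is that the operator needs only verification material keyed by DID, rather than a manually maintained, centrally writable inventory that an attacker could subvert.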


Neglecting The IAM Process Is Fighting A Losing Battle To Achieve Operational Excellence

The IAM process is a critical base for secure, cost-effective and efficient business operations. The foundation of IAM is the process first, followed by people, then technology. Zero trust has gained sizeable traction, but most do not realize that the identity process plays a vital role in getting that model off the ground. There is no zero-trust model without a rock-solid identity process. Complex access permissions, loose access-management processes and insider threats are the most common reasons for a breach. A study sponsored by the Identity Defined Security Alliance found that 99% of security and identity professionals believed that identity-related breaches were preventable. And yes, they are preventable. Can you imagine not having a process to revoke the access of a disgruntled, or simply careless, employee immediately after employment ends? The longer revocation takes because there is no set protocol or process, the greater the organization's exposure.
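
The revocation protocol argued for above can be sketched as an event handler. This is a hedged illustration, not any IAM product's API: the HR termination event, the entitlement store, and the audit log are all hypothetical stand-ins for whatever systems an organization actually runs.

```python
from datetime import datetime, timezone

# Hypothetical entitlement store: user -> set of granted systems.
ACCESS_DB = {"alice": {"vpn", "git", "payroll"}}

def on_termination_event(user: str) -> None:
    """Revoke every entitlement the moment HR flags the termination."""
    revoked = ACCESS_DB.pop(user, set())
    stamp = datetime.now(timezone.utc).isoformat()
    print(f"{stamp} revoked {sorted(revoked)} from {user}")  # audit trail

on_termination_event("alice")
assert "alice" not in ACCESS_DB   # access is gone immediately, not eventually
on_termination_event("alice")     # idempotent: replaying the event is harmless
```

The detail that matters is that revocation is triggered by the employment event itself, not by a ticket someone may or may not file later.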


Edge computing moves toward full autonomy

Tung uses the term "phygital" to describe the result when digital practices are applied to physical experiences, as in the autonomous management of edge data centers. "We see creating highly personalized and adaptive phygital experiences as the ultimate goal," she notes. "In a phygital world, anyone can imagine an experience, build it and scale it." In an edge computing environment that integrates digital processes and physical devices, hands-on network management is significantly reduced or eliminated: network failures and downtime are automatically detected and resolved, and configurations are applied consistently across the infrastructure, making scaling simpler and faster. Automatic data quality control is another potential benefit. "This involves a combination of sensor data, edge analytics, or natural language processing (NLP) to control the system and to deliver data on-site," Gallina says. Yet another way an autonomous edge environment can benefit enterprises is "zero touch" remote hardware provisioning at scale, with the OS and system software downloaded automatically from the cloud.
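
The hands-off behavior described here is, at its core, a reconciliation loop: compare each node's actual state against its desired state and repair any drift automatically. This minimal sketch assumes nothing about any vendor's tooling; the node names and configuration fields are invented for illustration.

```python
# Desired configuration per edge node (in practice, pulled from the cloud).
DESIRED = {"edge-1": {"os": "v2.4", "telemetry": True}}

def reconcile(actual: dict) -> dict:
    """Detect drifted or failed nodes and restore the desired config."""
    for node, want in DESIRED.items():
        if actual.get(node) != want:
            actual[node] = dict(want)  # reprovision automatically, no operator
    return actual

# A node running stale software is brought back in line on the next pass.
state = reconcile({"edge-1": {"os": "v2.3", "telemetry": False}})
assert state["edge-1"] == {"os": "v2.4", "telemetry": True}
```

Run continuously, the same loop covers both cases in the excerpt: failure recovery (a missing node is re-created) and consistent configuration (a drifted node is corrected), which is what makes scaling "zero touch".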


Who is responsible for Cloud Native Security?

In the past, application developers and infrastructure staff worked in separate arenas. Sparring between the two was all too common. Today that boundary is blurred, with the work being shared between the various stakeholders. With respect to security, this is referred to as “shifting left”; that is, moving security testing efforts earlier – from operations to the development realm. This emergent approach puts increasing security responsibility on developers. It evolved when companies realized that code could no longer wait to run in a production environment before being tested for weaknesses. Rather, it’s far more efficient to test it earlier during development. The multiplicity of security roles is another aspect of this process. AppSec, DevSecOps, and product security all share responsibility for alerts, control, and resolution of various threats that target enterprise applications. Such significant changes don’t make application development easier for organizations. In today's more agile development models, where speed and automation rule, developers are under pressure to build and ship applications faster than ever.
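
One concrete flavor of "shifting left" is a check that runs while code is being written or committed, rather than after it reaches production. The toy scanner below, with an invented pattern and inputs, flags hard-coded credentials the way a real pre-merge secret scanner might; it is a sketch of the idea, not a substitute for production tooling.

```python
import re

# Naive pattern for hard-coded credentials; real scanners use many rules.
SECRET_PATTERN = re.compile(r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.I)

def scan(source: str) -> list[str]:
    """Return every suspicious assignment found in the source text."""
    return [m.group(0) for m in SECRET_PATTERN.finditer(source)]

findings = scan('api_key = "sk-123"\nx = 1')
assert findings == ['api_key = "sk-123"']  # caught before it ships
assert scan("x = compute()") == []         # clean code passes
```

Wired into CI, a check like this fails the build at development time, which is exactly the "test earlier" efficiency the excerpt describes.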


Exploring the evolving security challenges within the metaverse

Much like the multiple national currencies that already exist in the real world, the metaverse will use its own currencies or cryptocurrencies. While crypto as a digital currency is set to develop over time, it could also lead to a significant increase in "money laundering" attempts within the metaverse's virtual economy. As these digital currencies evolve, uncertainty about their transferability from one metaverse to another and the lack of secure exchanges between buyers and sellers could allow threat actors to exploit the newly developed financial system. ... At present, the metaverse poses significant security challenges because most of its users value interconnectivity and user experience over intrusive online safety measures. This could exacerbate the security and privacy issues that already exist within social media. Given the inherent difficulty of governing or controlling web domains that extend beyond traditional national borders, the metaverse could also present itself to cyber criminals as an unregulated environment.


Meet the four forces shaping your workforce strategy

Four forces have shaped workforce strategies at key moments throughout human history—and they’re at it again. By understanding how the forces have operated in the past, you can better prepare your contemporary workforce to weather tomorrow’s challenges. ... Scarcity also emerges from technological shifts. For example, automation is creating redundancies in some fields, while a growing need for workers in advanced and emerging technologies is generating shortages in others. Demographic trends also help determine how scarce or plentiful workers are—and have huge economic and social implications. But scarcity isn’t just about head count or even dealing with the unprecedented challenges of the “great resignation”—it’s also about the abundance of skills your people have. For example, your company may have the right experts and specialists in place, and plenty of workers to fill vital roles. But you may still face a scarcity problem if your workforce lacks the broad-based skills it will need to succeed. The company may have a deficit in leadership or management skills, for example, or decision-making skills, project management skills, or even interpersonal skills.



Quote for the day:

"Leaders respond & change; the rest quit and blame." -- Orrin Woodward