Daily Tech Digest - May 26, 2022

4 Reasons to Shift Left and Add Security Earlier in the SDLC

Collaboration is critical for the security and development teams, especially when timelines have to change. The security operations center (SOC) team may need to train on cloud technologies and capabilities, while the cloud team may need help understanding how the organization performs risk management. Understanding the roles and responsibilities of these teams and the security functions each team fulfills is critical to managing security risks. In some scenarios, security teams can act as enablers for cloud engineering, teaching teams how to be self-sufficient in performing threat-modeling exercises. In other situations, security teams can act as escalation paths during security incidents. Lastly, security teams can also own and operate underlying platforms or libraries that provide contextual value to more stream-oriented cloud engineering teams, such as IaC scanning capabilities, shared libraries for authentication and monitoring, and support of workload constructs, such as secure service meshes.

We have bigger targets than beating Oracle, say open source DB pioneers

The pitching of open source against Oracle's own proprietary database has shifted as the market has moved on and developers lead a database strategy building a wide range of applications in the cloud, rather than a narrower set of business applications. Zaitsev pointed out that if you look at the rankings on DB-Engines, which combines mentions, job ads and social media data, Oracle is always the top RDBMS. But a Stack Overflow survey would not even put Oracle in the top five. So as far as developers are concerned, the debate about whether Oracle is the enemy is over. "The reality is, the majority of developers — especially good developers — prefer open source," he said. ... "There's a lot of companies now who are basically saying, 'Forget the Oracle API, I want to standardise on the PostgreSQL API.' They don't even want a non-PostgreSQL API because they see it is a growing market and opportunity with additional cost savings, flexibility, and continual innovation," he said, also speaking at Percona Live. "Years ago, if you had to rewrite your application from Oracle to PostgreSQL, that was a negative, that was a cost to you. ..."

Ultrafast Computers Are Coming: Laser Bursts Drive Fastest-Ever Logic Gates

The researchers’ advances have opened the door to information processing at the petahertz limit, where one quadrillion computational operations can be processed per second. That is almost a million times faster than today’s computers operating with gigahertz clock rates, where 1 petahertz is 1 million gigahertz. “This is a great example of how fundamental science can lead to new technologies,” says Ignacio Franco, an associate professor of chemistry and physics at Rochester who, in collaboration with doctoral student Antonio José Garzón-Ramírez ’21 (PhD), performed the theoretical studies that led to this discovery. ... The ultrashort laser pulse sets in motion, or “excites,” the electrons in graphene and, importantly, sends them in a particular direction—thus generating a net electrical current. Laser pulses can produce electricity far faster than any traditional method—and do so in the absence of applied voltage. Further, the direction and magnitude of the current can be controlled simply by varying the shape of the laser pulse (that is, by changing its phase).

A computer cooling breakthrough uses a common material to boost power 740 percent

Researchers at the University of Illinois at Urbana-Champaign (UIUC) and the University of California, Berkeley (UC Berkeley) have recently devised an invention that could cool down electronics more efficiently than alternative solutions and enable a 740 percent increase in power per unit, according to a press release by the institutions published Thursday. Tarek Gebrael, the lead author of the new research and a UIUC Ph.D. student in mechanical engineering, explained that current cooling solutions have three specific problems. "First, they can be expensive and difficult to scale up," he said. He brought up the example of heat spreaders made of diamonds which are obviously very expensive. Second, he described how conventional heat spreading approaches generally place the heat spreader and a heat sink (a device for dissipating heat efficiently) on top of the electronic device. Unfortunately, "in many cases, most of the heat is generated underneath the electronic device," meaning that the cooling mechanism isn't where it is needed most.

Tech firms are making computer chips with human cells – is it ethical?

Cortical Labs believes its hybrid chips could be the key to the kinds of complex reasoning that today’s computers and AI cannot produce. Another start-up making computers from lab-grown neurons, Koniku, believes its technology will revolutionise several industries including agriculture, healthcare, military technology and airport security. Other types of organic computers are also in the early stages of development. While silicon computers transformed society, they are still outmatched by the brains of most animals. For example, a cat’s brain contains 1,000 times more data storage than an average iPad and can use this information a million times faster. The human brain, with its trillion neural connections, is capable of making 15 quintillion operations per second. This can only be matched today by massive supercomputers using vast amounts of energy. The human brain only uses about 20 watts of energy, or about the same as it takes to power a lightbulb. It would take 34 coal-powered plants generating 500 megawatts each to store the same amount of data contained in one human brain in modern data storage centres.

SolarWinds: Here's how we're building everything around this new cybersecurity strategy

Now, SolarWinds uses a system of parallel builds, where the location keeps changing, even after the project has been completed and shipped. Much of this access is only provided on a need-to-know basis. That means if an attacker was ever able to breach the network, there's a smaller window to poison the code with a malicious build. "What we're really trying to achieve from a security standpoint is to reduce the threat window, providing the least amount of time possible for a threat actor to inject malware into our code," said Ramakrishna. But changing the process of how code is developed, updated and shipped isn't going to help prevent cyberattacks alone, which is why SolarWinds is now investing heavily in many other areas of cybersecurity. These areas include the likes of user training and actively looking for potential vulnerabilities in networks. Part of this involved building up a red team, cybersecurity personnel who have the job of testing network defences and finding potential flaws or holes that could be abused by attackers – crucially before the attackers find them.

How to stop your staff ignoring cybersecurity advice

While regular reminders are great, if you deliver the same message repeatedly, there is a danger that staff will zone out and ultimately become disengaged with the process. We’ve seen clear evidence of this over the past year, with awareness of key phrases falling, sometimes significantly. In this year’s State of the Phish Report, just over half (53%) of users could correctly define phishing, down from 63% the previous year. Recognition also fell across common terms like malware (down 2%) and smishing (down 8%). Ransomware was the only term to see an increase in understanding, yet only 36% could correctly define the term. ... Cybersecurity training may not sound like most people’s idea of fun, but there are plenty of ways to keep it positive and even enjoyable. Deliver training in short, sharp modules, and don’t be afraid to use different approaches such as animation or humor if it fits well into your company culture. Making security training competitive and turning it into a game can also aid the process. The gamification of training modules has been shown to increase engagement and motivation, as well as improving attainment scores in testing.

Why are current cybersecurity incident response efforts failing?

A risk-based approach to incident response enables enterprises to prioritize vulnerabilities and incidents based on the level of risk they pose to an organization. The simplest way of framing risk is a calculation on frequency of occurrence and severity. Malware frequently reaches endpoints, and response and clean-up can cost thousands of dollars (both directly and in lost productivity). Furthermore – and security teams all over the world would agree on this – vulnerabilities on internet-facing systems must be prioritized and remediated first. Those systems are continuously under attack, and as the rate of occurrence starts to approach infinity, so does risk. Similarly, there have been many threat groups that have cost enterprises millions directly, and in some cases tens of millions in lost operations and ERP system downtime. Large enterprises measure the cost of simple maintenance windows in ERP systems in tens of millions. Thus, one can imagine the substantial risk calculation for a business-critical application breach. As severity increases to that order of magnitude, so does risk.
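
The framing above, risk as a function of frequency of occurrence and severity, can be sketched as a small prioritization routine. All labels and figures below are hypothetical, chosen only to mirror the examples in the text (endpoint malware, internet-facing vulnerabilities, an ERP breach):

```python
# Illustrative risk-based prioritization: annualized risk is estimated
# occurrences per year multiplied by estimated cost per occurrence.
def risk_score(frequency_per_year: float, severity_usd: float) -> float:
    """Annualized risk: expected occurrences per year times cost of each."""
    return frequency_per_year * severity_usd

incidents = [
    # (label, estimated occurrences/year, estimated cost per occurrence in USD)
    ("Endpoint malware cleanup", 50, 5_000),
    ("Internet-facing CVE exploited", 200, 50_000),
    ("Business-critical ERP breach", 0.1, 20_000_000),
]

# Remediate the highest annualized risk first.
ranked = sorted(incidents, key=lambda i: risk_score(i[1], i[2]), reverse=True)
for label, freq, sev in ranked:
    print(f"{label}: {risk_score(freq, sev):,.0f} USD/year")
```

Even with a very high per-incident cost, a rare ERP breach can rank below continuously attacked internet-facing systems, which matches the article's argument for remediating those first.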

3 Must-Have Modernization Competencies for Application Teams

To decide the best path forward, leverage Competency #1. Architects and decision-makers should begin with automated architectural assessment tools to assess the technical debt of their monolithic applications, accurately identify the source of that debt, and measure its negative impact on innovation. These insights will help teams early in the cloud journey to determine the best strategy moving forward. Using AI-based modernization solutions, architects can exercise Competency #2 and automatically transform complex monolithic applications into microservices — using both deep domain-driven observability via a passive JVM agent and sophisticated static analysis, by analyzing flows, classes, usage, memory and resources to detect and unearth critical business domain functions buried within a monolith. Whether your application is still on-premises or you have already lifted and shifted to the cloud (Competency #3), the world’s most innovative organizations are applying vFunction on their complex “megaliths” to untangle complex, hidden and dense dependencies for business-critical applications that often total over 10 million lines of code and consist of thousands of classes.

The surprising upside to provocative conversations at work

To be sure, supporting and encouraging sensitive conversations isn’t easy. However, leaders can create the right conditions by establishing norms, offering resources, and helping ensure that these conversations happen in safe environments, with ground rules about avoiding judgment or trying to persuade people to change their minds. Critically, employees should always have the option to just show up and listen to better understand how colleagues are impacted by something happening in the world. The objective of these conversations should definitely not be to reach solutions or generate consensus. In that way, fostering these conversations is a growth opportunity for senior executives as well, who are often much more comfortable in problem-solving mode. The leader’s role here is to help the company bring meaning, humanity, and social impact to the workforce—not to deliver answers. The main takeaway for senior leaders is that you can’t isolate employees from the issues of the world. You can, however, help them sort through those issues and create a more welcoming, inclusive environment in which people are free to be their authentic selves—and maybe even learn from their colleagues.

Quote for the day:

"Cream always rises to the top...so do good leaders" -- John Paul Warren

Daily Tech Digest - May 25, 2022

Into the Metaverse: How Digital Twins Can Change the Business Landscape

With hybrid work becoming the norm, the mapping technology to build and manage workplace digital twins could also make it easier for startups to enter the market. New businesses that would otherwise need to invest in corporate real estate can achieve virtual flexibility at a lower cost. Because real-time mapping affords visualization of indoor assets, managers of airports or hospitals, for instance, can view multiple floors, entrances, stairwells and rooms to watch what's happening and where. We will likely see crossover in how this in-the-moment tracking of equipment and resources plays out in the metaverse and in the real world. ... While the metaverse will likely represent an avenue of escape and entertainment for many, there's the potential for it to be a valuable business tool with the capability to offer real-world simulations. It's something one consultant has been doing on such a scale as to mimic the effects of global warming and show how it will disrupt businesses and entire cities. Experiencing one's own replicated neighborhood relative to rising seas, encroaching storms and more, offers a visceral, relatable experience more likely to motivate action.

Infra-as-Data vs. Infra-as-Code: What’s the Difference?

On a high level, Infrastructure-as-Data tools like VMware’s Idem and Ansible, and Infrastructure-as-Code, dominated by Terraform, were created to help DevOps teams achieve their goals of simplifying and automating application deployments across multicloud and different environments, while helping to reduce manual configurations and processes. ... When cloud architectures need to be expressed using code, “you’re just writing more and more and more and more Terraform,” he said. “Idem is different from how you generally think of Infrastructure as Code — everything boils down to these predictable datasets.” “Instead of sitting down and saying, ‘I’m going to write out a cloud in Terraform,’ you can point Idem towards your cloud, and it will automatically generate all of the data and all of the code and the runtimes to enforce it in its current state.” At the same time, Idem, as well as Ansible to a certain extent, were designed to make cloud provisioning more automated and simple to manage.
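
The contrast described above, a predictable dataset versus ever-growing imperative code, can be illustrated with a toy reconciliation loop. This is not Idem or Terraform syntax, just a sketch of the idea that when infrastructure is data, "enforcing current state" reduces to diffing two datasets (all resource names are invented):

```python
# Infrastructure-as-Data sketch: desired state is a plain dataset that an
# engine can diff against discovered reality and enforce.
desired_state = {
    "vpc": {"name": "prod-vpc", "cidr": "10.0.0.0/16"},
    "bucket": {"name": "prod-assets", "versioning": True},
}

# A mock of what a discovery step might generate from an existing cloud.
current_state = {
    "vpc": {"name": "prod-vpc", "cidr": "10.0.0.0/16"},
    "bucket": {"name": "prod-assets", "versioning": False},
}

def plan(desired: dict, current: dict) -> list[str]:
    """Compute the actions needed to reconcile current state with desired data."""
    actions = []
    for resource, config in desired.items():
        if current.get(resource) != config:
            actions.append(f"update {resource} -> {config}")
    return actions

print(plan(desired_state, current_state))  # only the drifted bucket needs work
```

The "point Idem at your cloud" workflow quoted above corresponds to generating `current_state` automatically rather than writing it by hand.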

How to develop competency in cyber threat intelligence capabilities

It is necessary to understand operating systems and networks principles at all levels: File storage, access management, log files policies, security policies, protocols used to share information between computers, et cetera. The core concepts, components and conventions associated with cyberdefense and cybersecurity should be identified, and a strong knowledge of industry best practices and frameworks is mandatory. Another core tenet is how defensive approaches and technology align to at least one of the five cyber defense phases: Identify, protect, detect, respond and recover. Key concepts to know here are identity and access management and control, network segmentation, cryptography use cases, firewalls, endpoint detection and response, signature- and behavior-based detections, threat hunting and incident response, and red and purple teams. One should develop a business continuity plan, disaster recovery plan and incident response plan. ... This part is all about understanding the role and responsibilities of everyone involved: Reverse engineers, security operation center analysts, security architects, IT support and helpdesk members, red/blue/purple teams, chief privacy officers and more.

Build collaborative apps with Microsoft Teams

Teams Toolkit for Visual Studio, Visual Studio Code, and command-line interface (CLI) are tools for building Teams and Microsoft 365 apps, fast. Whether you’re new to the Teams platform or a seasoned developer, Teams Toolkit is the best way to create, build, debug, test, and deploy apps. Today we are excited to announce the Teams Toolkit for Visual Studio Code and CLI is now generally available (GA). Developers can start with scenario-based code scaffolds for notification and command-and-response bots, automate upgrades to the latest Teams SDK version, and debug apps directly to Outlook and Office. ... Microsoft 365 App Compliance Program is designed to evaluate and showcase the trustworthiness of applications based on industry standards, such as SOC 2, PCI DSS, and ISO 27001 for security, privacy, and data handling practices. We are announcing the preview of the App Compliance Automation Tool for Microsoft 365 for applications built on Azure to help developers accelerate the compliance journey of their apps.

How API gateways complement ESBs

In the modern IT landscape, service development has moved toward an API-first and spec-first approach. IT environments are also becoming increasingly distributed. After all, organizations are no longer on-premises or even cloud-only, but working with hybrid cloud and multicloud environments. And their teams are physically distributed, too. Therefore, points of integration must be able to span various types of environments. The move toward microservices is fundamentally at odds with the traditional, monolithic ESB. By breaking down the ESB monolith into multiple focused services, you can retain many of the ESB’s advantages while increasing flexibility and agility. ... As API standards have matured, the API gateway can be leaner than an ESB, focused specifically on cross-cutting concerns. Additionally, the API gateway is focused primarily on client-service communication, rather than on all service-to-service communication. This specificity of scope allows API gateways to avoid scope creep, keeping them from becoming yet another monolith that needs to be broken down. When selecting an API gateway, it is important to find a product with a clear identity rather than an extensive feature set.
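
The "cross-cutting concerns" role described above can be made concrete with a minimal sketch: the gateway handles authentication and rate limiting once at the edge, then routes to a focused backend service. Everything here (class name, token check, limits) is an invented illustration, not any particular gateway product's API:

```python
# Minimal API gateway sketch: cross-cutting concerns at the edge,
# routing to focused backend services behind it.
from collections import defaultdict

class ApiGateway:
    def __init__(self, routes: dict, rate_limit: int = 100):
        self.routes = routes            # path prefix -> backend handler
        self.rate_limit = rate_limit    # requests allowed per client
        self.counts = defaultdict(int)

    def handle(self, client: str, token: str, path: str):
        # Cross-cutting concern 1: authentication (stand-in for a real check).
        if token != "valid-token":
            return 401, "unauthorized"
        # Cross-cutting concern 2: rate limiting.
        self.counts[client] += 1
        if self.counts[client] > self.rate_limit:
            return 429, "too many requests"
        # Routing: forward client-service traffic to the matching backend.
        for prefix, handler in self.routes.items():
            if path.startswith(prefix):
                return 200, handler(path)
        return 404, "no route"

gateway = ApiGateway({"/orders": lambda p: f"orders service handled {p}"})
print(gateway.handle("client-a", "valid-token", "/orders/42"))
```

Note how the gateway knows nothing about order logic; keeping that separation is exactly the "clear identity, not an extensive feature set" advice in the excerpt.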

Artificial intelligence is breaking patent law

Inventions generated by AI challenge the patent system in a new way because the issue is about ‘who’ did the inventing, rather than ‘what’ was invented. The first and most pressing question that patent registration offices have faced with such inventions has been whether the inventor has to be human. If not, one fear is that AIs might soon be so prolific that their inventions could overwhelm the patent system with applications. Another challenge is even more fundamental. An ‘inventive step’ occurs when an invention is deemed ‘non-obvious’ to a ‘person skilled in the art’. This notional person has the average level of skill and general knowledge of an ordinary expert in the relevant technical field. If a patent examiner concludes that the invention would not have been obvious to this hypothetical person, the invention is a step closer to being patented. But if AIs become more knowledgeable and skilled than all people in a field, it is unclear how a human patent examiner could assess whether an AI’s invention was obvious. An AI system built to review all information published about an area of technology before it invents would possess a much larger body of knowledge than any human could.

SIM-based Authentication Aims to Transform Device Binding Security to End Phishing

The SIM card has a lot going for it. SIM cards use the same highly secure, cryptographic microchip technology that is built into every credit card. It's difficult to clone or tamper with, and there is a SIM card in every mobile phone – so every one of your users already has this hardware in their pocket. The combination of the mobile phone number with its associated SIM card identity (the IMSI) is a combination that's difficult to phish as it's a silent authentication check. The user experience is superior too. Mobile networks routinely perform silent checks that a user's SIM card matches their phone number in order to let them send messages, make calls, and use data – ensuring real-time authentication without requiring a login. Until recently, it wasn't possible for businesses to program the authentication infrastructure of a mobile network into an app as easily as any other code. tru.ID makes network authentication available to everyone. ... Moreover, with no extra input from the user, there's no attack vector for malicious actors: SIM-based authentication is invisible, so there are no credentials or codes to steal, intercept or misuse.
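
Conceptually, the silent check amounts to asking the mobile network whether the SIM currently behind a phone number matches the identity enrolled at sign-up. The mock below simulates the carrier as a lookup table purely for illustration; real deployments query carrier infrastructure through an API provider (such as tru.ID, per the article), never a local dict:

```python
# Purely illustrative mock of a silent SIM check: does the SIM identity
# (IMSI) currently associated with a phone number match what the service
# enrolled? The "network" here is simulated; numbers/IMSIs are fake.
NETWORK_REGISTRY = {"+15550100": "310150123456789"}  # number -> active IMSI

def sim_check(phone_number: str, enrolled_imsi: str) -> bool:
    """True if the SIM currently behind this number matches enrollment."""
    return NETWORK_REGISTRY.get(phone_number) == enrolled_imsi

print(sim_check("+15550100", "310150123456789"))  # True: same SIM, no OTP needed
print(sim_check("+15550100", "310150999999999"))  # False: SIM swapped, re-verify
```

The second case is the useful property: a SIM swap changes the IMSI, so the silent check fails and the service can force step-up verification, with no code for a phisher to intercept.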

How to Manage Metadata in a Highly Scalable System

The realization that current data architectures can no longer support the needs of modern businesses is driving the need for new data engines designed from scratch to keep up with metadata growth. But as developers begin to look under the hood of the data engine, they are faced with the challenge of enabling greater scale without the usual impact of compromising storage performance, agility and cost-effectiveness. This calls for a new architecture to underpin a new generation of data engines that can effectively handle the tsunami of metadata and still make sure that applications can have fast access to metadata. Next-generation data engines could be a key enabler of emerging use cases characterized by data-intensive workloads that require unprecedented levels of scale and performance. For example, implementing an appropriate data infrastructure to store and manage IoT data is critical for the success of smart city initiatives. This infrastructure must be scalable enough to handle the ever-increasing influx of metadata coming from traffic management, security, smart lighting, waste management and many other systems without sacrificing performance.

GDPR 4th anniversary: the data protection lessons learned

“As GDPR races to retrofit new legislative ‘add ons’ that most technology companies will have evolved well beyond by the time they’re implemented, GDPR is barely an afterthought for marketing professionals who are readying themselves for a much more seismic change this year: the crumbling of third-party cookies,” he explained. “Because of that, advertisers will require new, privacy-respecting, non-tracking-based approaches to reach their target audiences. Now, then, is the time for businesses to establish what a value exchange between users and an ad-funded, free internet actually looks like – but that goes far beyond the remit of GDPR.” To increase focus on privacy in commercial settings, McDermott believes that major stakeholders such as Google need to “lead the charge” and collaborate when it comes to establishing a best practice on data capture. “For the smaller businesses,” he added, “it’ll be about forming an allegiance with bigger technology companies who have the resources to navigate these changes so they can chart a course together.”

Where is attack surface management headed?

Organizations increasingly suffer from a lack of visibility, drown in threat intelligence overload, and struggle with inadequate tools. This means they struggle to discover, classify, prioritize, and manage internet-facing assets, which leaves them vulnerable to attack and incapable of defending their organization proactively. As attack surfaces expand, organizations can’t afford to limit their efforts to just identify, discover, and monitor. They must improve their security management by adding continuous testing and validation. More can and should be done to make EASM solutions more effective and reduce the number of tools teams need to manage. Solutions must also blend legacy EASM with vulnerability management and threat intelligence. This more comprehensive approach addresses business and IT risk from a single solution. When vendors integrate threat intelligence and vulnerability management in an EASM solution, in addition to enabling lines of business within the organization to assign risk scores based on business value, the value increases exponentially.

Quote for the day:

"The greatest good you can do for another is not just share your riches, but reveal to them their own." -- Benjamin Disraeli

Daily Tech Digest - May 24, 2022

7 machine identity management best practices

When keys and certificates are static, it makes them ripe targets for theft and reuse, says Anusha Iyer, co-founder and CTO at Corsha, a cybersecurity vendor. "In fact, credential stuffing attacks have largely shifted from human username and passwords to API credentials, which are essentially proxies for machine identity today," she says. As API ecosystems are seeing immense growth, this problem is only becoming more challenging. Improper management of machine identities can lead to security vulnerabilities, agrees Prasanna Parthasarathy, senior solutions manager at the Cybersecurity Center of Excellence at Capgemini Americas. In the worst case, attackers can wipe out entire areas in the IT environment all at once, he says. "Attackers can use known API calls with a real certificate to gain access to process controls, transactions, or critical infrastructure – with devastating results." To guard against this, companies should have strict authorization of the source machines, cloud connections, application servers, handheld devices, and API interactions, Parthasarathy says. Most importantly, trusted certificates should not be static, he says.

Kalix: Build Serverless Cloud-Native Business-Critical Applications with No Databases

Kalix aims to provide a simple developer experience for modelling and building stateful and stateless cloud-native applications, along with a NoOps experience, including a unified way to do system design, deployment, and operations. In addition, it provides a Reactive Runtime that delivers ultra-low latency with high resilience by continuously optimizing data access, placement, locality, and replication. When using currently available Functions-as-a-Service (FaaS) offerings, application developers need to learn and manage many different SDKs and APIs to build a single application. Each component brings its own feature set, semantics, guarantees, and limitations. In contrast, Kalix provides a unifying application layer that pulls together the necessary pieces. These include databases, message brokers, caches, service meshes, API gateways, blob storages, CDN networks, CI/CD products, etc. Kalix exposes them into one single unified programming model, abstracting the implementation details from its users. By bringing all of these components into a single package, developers don't have to set up and tune databases, maintain and provision servers, and configure clusters, as the Kalix platform handles this.

Snake Keylogger Spreads Through Malicious PDFs

The campaign—discovered by researchers at HP Wolf Security—aims to dupe victims with an attached PDF file purporting to have information about a remittance payment, according to a blog post published Friday. Instead, it loads the info-stealing malware, using some tricky evasion tactics to avoid detection. “While Office formats remain popular, this campaign shows how attackers are also using weaponized PDF documents to infect systems,” HP Wolf Security researcher Patrick Schlapfer wrote in the post, which opined in the headline that “PDF Malware Is Not Yet Dead.” Indeed, attackers using malicious email campaigns have preferred to package malware in Microsoft Office file formats, particularly Word and Excel, for the past decade, Schlapfer said. In the first quarter of 2022 alone, nearly half (45 percent) of malware stopped by HP Wolf Security used Office formats, according to researchers. “The reasons are clear: users are familiar with these file types, the applications used to open them are ubiquitous, and they are suited to social engineering lures,” he wrote. 

Paying the ransom is not a good recovery strategy

“One of the hallmarks of a strong Modern Data Protection strategy is a commitment to a clear policy that the organization will never pay the ransom, but do everything in its power to prevent, remediate and recover from attacks,” added Allan. “Despite the pervasive and inevitable threat of ransomware, the narrative that businesses are helpless in the face of it is not an accurate one. Educate employees and ensure they practice impeccable digital hygiene; regularly conduct rigorous tests of your data protection solutions and protocols; and create detailed business continuity plans that prepare key stakeholders for worst-case scenarios.” The “attack surface” for criminals is diverse. Cyber-villains most often first gained access to production environments through errant users clicking malicious links, visiting unsecure websites or engaging with phishing emails — again exposing the avoidable nature of many incidents. After having successfully gained access to the environment, there was very little difference in the infection rates between data center servers, remote office platforms and cloud-hosted servers.

Beneath the surface: Uncovering the shift in web skimming

Web skimming typically targets platforms like Magento, PrestaShop, and WordPress, which are popular choices for online shops because of their ease of use and portability with third-party plugins. Unfortunately, these platforms and plugins come with vulnerabilities that the attackers have constantly attempted to leverage. One notable web skimming campaign/group is Magecart, which gained media coverage over the years for affecting thousands of websites, including several popular brands. In one of the campaigns we’ve observed, attackers obfuscated the skimming script by encoding it in PHP, which, in turn, was embedded inside an image file—a likely attempt to leverage PHP calls when a website’s index page is loaded. Recently, we’ve also seen compromised web applications injected with malicious JavaScript masquerading as Google Analytics and Meta Pixel (formerly Facebook Pixel) scripts. Some skimming scripts even had anti-debugging mechanisms, in that they first checked if the browser’s developer tools were open. Given the scale of web skimming campaigns and the impact they have on organizations and their customers, a comprehensive security solution is needed to detect and block this threat.
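
One simple defensive signal against the masquerading technique described above is to flag script tags that claim to be analytics but load from untrusted hosts. The sketch below is a deliberately naive illustration (the regex, page snippet, and domain allow-list are invented); real skimmer detection relies on far deeper behavioral and content analysis:

```python
# Naive detection sketch: find <script> src URLs that mention "analytics"
# but are not served from a known analytics host.
import re

TRUSTED_ANALYTICS_HOSTS = (
    "www.google-analytics.com",
    "www.googletagmanager.com",
)

def suspicious_analytics_scripts(html: str) -> list[str]:
    """Return script src URLs that mention 'analytics' but load elsewhere."""
    flagged = []
    for src in re.findall(r'<script[^>]+src="([^"]+)"', html):
        host = re.sub(r"^https?://", "", src).split("/")[0]
        if "analytics" in src and host not in TRUSTED_ANALYTICS_HOSTS:
            flagged.append(src)
    return flagged

page = '''
<script src="https://www.google-analytics.com/analytics.js"></script>
<script src="https://evil.example.com/fake-analytics.js"></script>
'''
print(suspicious_analytics_scripts(page))  # flags only the impostor script
```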

Next generation PIM: how AI can revolutionise product data

An ideal AI-powered PIM solution addresses a gamut of data management needs that translate to benefits like analysing images and comparing them to product descriptions; translating texts automatically; analysing and comparing data; understanding the statistical rules; and correcting what doesn’t comply with those rules. The use of AI in PIM also helps create a contextualised, straightforward path for businesses to pursue by providing new insights accrued from various products and customer data sets across channels. With the right training, a neural network (deep learning) can be formed to sweep and analyse through metadata pertaining to different data sets for the delivery of accurate results across channels. Thus, it ultimately relieves organisations of time-consuming, repetitive tasks in managing changes or errors in their product data cycles. The role of PIM is constantly evolving; for example, in experiential retail, the PIM system needs to be implemented with AI for human context. Here, there is a change in both sides of the retailer-consumer dynamic, through product information management solutions that are expected to be more open network-oriented with AI.

IT Support for Edge Computing: Strategies to Make it Easier

IT vendors commonly assign account managers to major customer accounts for the purpose of managing relationships. If an issue arises, this account manager “point person” can summon the necessary resources and follow up to see that work and/or support is completed to a satisfactory resolution. IT can profit from the account manager approach with end users, especially if users have an abundance of edge applications and networks. An assigned business analyst who coordinates with tech support and others in IT can be the contact point person for an end-user department whenever a persistent problem occurs. This account manager can also periodically (at least quarterly) visit the user department and review technology performance and IT support. End users are more apt to communicate and cooperate with IT if they know they have someone to go to when they need to escalate an issue. ... There is no area of IT that is more qualified to give insights into how and where networks and systems are failing than technical support. This is because technical support is out there every day hearing about problems from end users, and then troubleshooting the problems and deducing how they are happening.

How to Run Your Product Department Like a Coach

A key part of this new way of working was something that was drilled into me as an agile coach: keep teams together and give them time “to be teams”. Until this point, teams had formed and disbanded for each project; however, I knew that for us to move faster, the key would be high-performing teams, and that takes time. Instead, we would try to keep people together and, if needed, change their focus rather than disband them. This has easily been one of the most successful parts of the new way of working I brought to accuRx. As part of this focus, I worked closely with the CTO to establish clear leadership and accountability within each team. We agreed that every team would have a PM/TL (technical lead) pair, with both held jointly accountable for the team being healthy and effective at delivering at pace. This “leadership in pairs” system has been crucial in allowing us to scale quickly whilst holding ourselves to account. The final piece of the jigsaw was ensuring that I was able to influence (or own) what our organisational structure would look like for Product (and Engineering).

Managed cloud services: 4 things IT leaders should know

Managed cloud services still require some internal expertise if you want to maximize your ROI – they should supercharge the IT team, not take its place. You can certainly use cloud managed services to do more with less – the constant marching order in today’s business world – and attain technological scale that wouldn’t otherwise be possible. But you should still do so in the context of your existing team and future hiring plans. “When developing a cloud-managed service strategy, you need to consider that we are now combining what used to be two separate sides of the house, infrastructure and application development,” DeCurtis notes. He adds that skills such as infrastructure as code will be essential for complex cloud services environments. If you’re already a mature DevOps shop, then you’re ahead of the game. Other teams may have some learning to do – and leadership may realize that people who can blend once-siloed job functions can be tough to find – though not as impossible as it once seemed. “Fortunately, these roles are becoming more readily available as organizations continue to adopt cloud strategies,” DeCurtis says.

IT risk management best practices for organisations

When we talk about risk, what we really mean is each organisation’s unique set of vulnerabilities. These loopholes are monitored, generically and specifically, by bad actors who would exploit them for financial or political gain, or occasionally just for clout. The first step, then, is to understand centres of risk within your organisation. These evolve with tech advances and behavioural change, for example with the transition to hybrid working brought on by the Covid-19 pandemic. “This has presented new challenges with expanded networks beyond the traditional office environment: no physical barriers or access controls, reduced VPN effectiveness, more endpoints and a greater attack surface to monitor,” says Folliss. “Remote working distorts an IT security team’s ability to manage and control the network and introduces new threats and vulnerabilities – and thus new risk.” So your analysis can’t be a one-off; rather, it must be a continuous, rigorous, and honest programme of testing and assessment that gets to the heart of an organisation’s DNA, says Pascal Geenens, director of threat intelligence at Radware.

Quote for the day:

"Confident and courageous leaders have no problems pointing out their own weaknesses and ignorance." -- Thom S. Rainer

Daily Tech Digest - May 23, 2022

Clearview AI ordered to delete facial recognition data belonging to UK residents

The ICO said Clearview violated several tenets of UK data protection law, including failing to use data in a way that is “fair and transparent” (given that residents’ images were scraped without their knowledge or consent), “failing to have a lawful reason for collecting people’s information,” and “failing to have a process in place to stop the data being retained indefinitely.” However, although the ICO has issued a fine against Clearview and ordered the company to delete UK data, it’s unclear how this might be enforced if Clearview has no business or customers in the country to sanction. In response to a similar deletion order and fine issued in Italy under EU law earlier this year, Clearview’s CEO Hoan Ton-That responded that the US-based company was simply not subject to EU legislation. ... In response to the same query, Lee Wolosky of Jenner and Block, Clearview’s legal representatives, told The Verge: “While we appreciate the ICO’s desire to reduce their monetary penalty on Clearview AI, we nevertheless stand by our position that the decision to impose any fine is incorrect as a matter of law ... ”

AI for Software Developers: a Future or a New Reality?

On the one hand, AI authors don’t copy anything into the algorithm. On the other hand, the neural network is incapable of independent thinking. All the code it produces is a combination of fragments it has seen during the learning phase. It may even create pieces of code that look like exact copies from the training dataset. The point is that even pieces that look independent are no more independent than the copies. The problem is pretty new, and we haven’t seen any court decisions yet. This uncertainty slows down the progress of product developers: people don’t want to make significant investments into something that might become illegal tomorrow. We faced the same issue when creating our code completion system. In addition to the potential legal limitations, there were technical difficulties as well. The code we can find in an open-source repository is in some sense “complete”. It usually compiles, passes simple tests, has clear formatting, doesn’t contain duplicate blocks or temporary debug sections. However, the code we have to work with in the editor is not “complete” most of the time.

What is JPA? Introduction to the Jakarta Persistence API

From a programming perspective, the ORM layer is an adapter layer: it adapts the language of object graphs to the language of SQL and relational tables. The ORM layer allows object-oriented developers to build software that persists data without ever leaving the object-oriented paradigm. When you use JPA, you create a map from the datastore to your application's data model objects. Instead of defining how objects are saved and retrieved, you define the mapping between objects and your database, then invoke JPA to persist them. If you're using a relational database, much of the actual connection between your application code and the database will then be handled by JDBC. As a specification, JPA provides metadata annotations, which you use to define the mapping between objects and the database. Each JPA implementation provides its own engine for JPA annotations. The JPA spec also provides the EntityManager, which is the key point of contact with the JPA system.
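The mapping annotations and the EntityManager can be seen together in a short sketch. This is a hedged illustration, not code from the article: the Book entity, the "books" table, and the "bookstore" persistence-unit name are invented for the example, and running it requires a JPA provider (such as Hibernate or EclipseLink) plus a matching persistence.xml on the classpath:

```java
// Minimal Jakarta Persistence sketch: annotations declare the object-to-table
// mapping; the EntityManager persists and queries objects. Entity name, table
// name, and persistence-unit name are illustrative assumptions.
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.Persistence;
import jakarta.persistence.Table;

@Entity
@Table(name = "books")          // maps this class to the "books" table
class Book {
    @Id @GeneratedValue         // primary key generated by the database
    private Long id;
    private String title;       // maps to a "title" column by convention

    protected Book() {}         // JPA requires a no-arg constructor
    Book(String title) { this.title = title; }
    String getTitle() { return title; }
}

public class JpaExample {
    public static void main(String[] args) {
        // "bookstore" must match a persistence unit defined in persistence.xml
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("bookstore");
        EntityManager em = emf.createEntityManager();

        em.getTransaction().begin();
        em.persist(new Book("Thinking in Java"));   // provider issues INSERT via JDBC
        em.getTransaction().commit();

        // JPQL queries objects, not tables; the provider translates to SQL
        Book found = em.createQuery("SELECT b FROM Book b WHERE b.title = :t", Book.class)
                       .setParameter("t", "Thinking in Java")
                       .getSingleResult();
        System.out.println(found.getTitle());

        em.close();
        emf.close();
    }
}
```

Note that the class never issues SQL directly: the annotations are the mapping metadata the spec describes, and the provider's engine generates the INSERT and SELECT through JDBC.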

What’s so great about Google’s ‘translation glasses’?

Unlike Google Glass, the translation-glasses prototype is augmented reality (AR), too. Let me explain what I mean. Augmented reality happens when a device captures data from the world and, based on its recognition of what that data means, adds information to it that’s available to the user. Google Glass was not augmented reality — it was a heads-up display. The only contextual or environmental awareness it could deal with was location. Based on location, it could give turn-by-turn directions or location-based reminders. But it couldn’t normally harvest visual or audio data, then return to the user information about what they were seeing or hearing. Google’s translation glasses are, in fact, AR: they essentially take audio data from the environment and return to the user a transcript of what’s being said in the language of choice. Audience members and the tech press reported on the translation function as the exclusive application for these glasses without any analytical or critical exploration, as far as I could tell. The most glaring fact that should have been mentioned in every report is that translation is just an arbitrary choice for processing audio data in the cloud.
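To make that last point concrete, here is a purely illustrative sketch (with no relation to Google's actual implementation) of the AR loop described above: capture data from the environment, run it through an interpreter, and overlay the result for the user. The pluggable Function parameter is the point: translation is just one interchangeable choice of audio processing.

```java
// Illustrative AR loop: sense -> interpret -> overlay. The phrase table is a
// stand-in for a cloud service; all names here are invented for the example.
import java.util.List;
import java.util.function.Function;

public class ArPipeline {
    /** Stands in for a cloud translation service; here it is just a lookup. */
    static String translate(String phrase) {
        return switch (phrase) {
            case "hola" -> "hello";
            case "gracias" -> "thank you";
            default -> "[no translation]";
        };
    }

    /** The AR loop: each sensed phrase is interpreted and returned as an
     *  overlay line. Any audio processor could be plugged in as interpreter. */
    static List<String> run(List<String> heardPhrases, Function<String, String> interpreter) {
        return heardPhrases.stream().map(interpreter).toList();
    }

    public static void main(String[] args) {
        // "Subtitles for the world": overlay each translated phrase.
        run(List.of("hola", "gracias"), ArPipeline::translate)
            .forEach(System.out::println);
    }
}
```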

Augmented reality, superhuman abilities and the future of medicine

With AR headsets and new techniques for registering 3D medical images to a patient’s real body, the superpower of x-ray vision is now a reality. In an impressive study from Teikyo University School of Medicine in Japan, an experimental emergency room was tested with the ability to capture whole-body CT scans of trauma patients and immediately allow the medical team, all wearing AR headsets, to peer into the patient on the exam table and see the trauma in the exact location where it resides. This allowed the team to discuss the injuries and plan treatment without needing to refer back and forth to flat screens, saving time, reducing distraction, and eliminating the need for mental transformations. In other words, AR technology takes medical images off the screen and places them in 3D space at the exact location where they’re most useful to doctors – perfectly aligned with the patient’s body. Such a capability is so natural and intuitive that I predict it will be rapidly adopted across medical applications. In fact, I expect that in the early 2030s doctors will look back at the old way of doing things, glancing back and forth at flat screens, as awkward and primitive.

My Instagram account was hacked and two-factor authentication didn't help

It turns out the combination of the URL on the image and my reply gave them enough information to take over my account. Now, even when I saw trouble brewing -- an Instagram e-mail came asking me if I wanted to change my phone number to one in Nigeria -- I wasn't too worried. I'd protected my account with two-factor authentication (2FA). While 2FA isn't perfect, it's better than anything else out there for basic security. But, here's where things went awry. Instagram should have sent me an e-mail with a link asking me to "revert this change." Instagram didn't send such a message. Instead, I received e-mails from security@mail.instagram.com that provided a link about how to "secure your account." This dropped me into Instagram's pages for a hacked account, which wasn't any help. ... Argh! I followed up with Instagram's suggestions on how to bring my account back. I asked for a login link from my Android Instagram app. I got one, which didn't work. Next, I requested a security code. I got one. That didn't work either, no doubt because -- by that time -- the account was now responding to its "new" e-mail address and phone number.

What Gen Z and millennials want from employers

“The recurring theme with Gen Z — beside the compensation piece — is the focus on workplace flexibility and mental health. Those are two places we see a huge divergence from other generations,” Remley said. “If we’d talk to Boomers or Gen Xers concerning mental health benefits, they would say that’s my business and not my employer’s business. Whereas, Gen Z is wanting assistance with mental health from their employers.” Benefits ranked high in both surveys as reasons workers are drawn to and want to remain with an organization. At the top of the list: good mental healthcare and healthcare benefits in general. And employers do seem to be making progress when it comes to prioritizing mental health and well-being in the workplace, Deloitte reported. "More than half agree that workplace well-being and mental health has become more of a focus for their employers since the start of the pandemic. However, there are mixed reviews on whether the increased focus is actually having a positive impact," Deloitte's report stated.

Q&A: What CDW UK has planned for 2022

Innovating for sustainability will continue to be a key focus for us and our customers, and we are committed to finding new ways to help them on their journeys to net zero in any way we can. Not only does focusing on sustainability ensure business continuity by conserving resources but customers and employees want to buy from and work for companies that share their values. We believe sustainability is a shared responsibility and we want to set a strong example. Through our beGreen program, we provide coworkers with the platform to share ideas and take collective action to improve our environment. Areas of focus include coworker education, community awareness, recycling and resource conservation. The program is managed by a cross-functional team of coworkers from multiple CDW locations. This team collaborates internally and with members of the communities where we operate. Sustainability can no longer be a secondary consideration, which is why we’re also in the process of developing a global plan to make realistic, attainable and strong commitments to being a more sustainable organisation ourselves, while working with our partners and customers to do the same.

IDaaS explained: How it compares to IAM

IDaaS isn’t all sunshine and rainbows though, and organizations must account for some major considerations when evaluating it. If identity is truly the new perimeter, adopting IDaaS gives some level of control of your perimeter to an IDaaS service provider. This is similar to the shared responsibility model concept in cloud computing, but extended further up the stack: from not just infrastructure to critical things such as identities, permissions, and access control. Some of the benefits cited in the above table can now potentially be a vice or point of contention depending on your organizational requirements and security sensitivity. Since you are consuming the application and system associated with IAM, you are now limited to the permissions the provider’s offering includes and likely have limited ability to alter the way the offering functions. This is due to the reality that the IDaaS provider offers their interface/application to many customers and can only allow so much customization without losing the ability to have a standardized offering.

Building a learning culture with AI

The first element is an AI model, which uses both internal and external data to assess competencies against a core skill set we are seeking to assess and develop; e.g., a full-stack engineer. It compares employees’ skill sets to someone in a similar role or title in the external marketplace on a scale of 1 to 5. We also pull in internal data sources, such as Jira and Workday, which contain information from their resumes, for example. That helps strengthen the accuracy and correlation of the model. The second element used to assess skill sets is an employee self-assessment. Employees receive the results of the AI model, and they validate whether they believe their skills are in line with the AI assessment. The final prong is the manager assessment, in which the manager rates the skills of that individual employee. This approach to assessing skill sets has been valuable for several reasons. First, it ensures the use of objective information in the evaluation process, reducing the influence of subjective views that managers may have, based on limited interactions with employees.
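As a sketch of how such a three-pronged assessment might be combined, the snippet below blends the three 1-to-5 scores and flags large AI-versus-human disagreement for re-validation. The weights and the 1.5-point review threshold are illustrative assumptions, not figures from the article:

```java
/** Hypothetical three-pronged skill assessment: blends the AI model score,
 *  the employee self-assessment, and the manager rating (all on a 1-5 scale).
 *  Weights and threshold are invented for illustration. */
public class SkillAssessment {
    // Assumed weights: the AI model is treated as the anchor signal.
    static final double AI_WEIGHT = 0.5, SELF_WEIGHT = 0.2, MANAGER_WEIGHT = 0.3;

    /** Returns a blended 1-5 score for the skill. */
    static double blend(double aiScore, double selfScore, double managerScore) {
        return AI_WEIGHT * aiScore + SELF_WEIGHT * selfScore + MANAGER_WEIGHT * managerScore;
    }

    /** A large gap between the AI score and the average human score suggests
     *  the profile should be re-validated rather than accepted automatically. */
    static boolean needsReview(double aiScore, double selfScore, double managerScore) {
        double humanAvg = (selfScore + managerScore) / 2.0;
        return Math.abs(aiScore - humanAvg) >= 1.5;   // assumed threshold
    }

    public static void main(String[] args) {
        System.out.println(blend(4.0, 3.0, 3.5));          // ≈ 3.65
        System.out.println(needsReview(5.0, 2.0, 3.0));    // large gap: true
    }
}
```

The blend keeps the objectivity the article highlights (the AI score dominates) while the review flag preserves the validation role of the self- and manager assessments.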

Quote for the day:

"One of the sad truths about leadership is that, the higher up the ladder you travel, the less you know." -- Margaret Heffernan

Daily Tech Digest - May 22, 2022

6 business risks of shortchanging AI ethics and governance

When enterprises build AI systems that violate users’ privacy, that are biased, or that do harm to society, it changes how their own employees see them. Employees want to work at companies that share their values, says Steve Mills, chief AI ethics officer at Boston Consulting Group. “A high number of employees leave their jobs over ethical concerns,” he says. “If you want to attract technical talent, you have to worry about how you’re going to address these issues.” According to a survey released by Gartner earlier this year, employee attitudes toward work have changed since the start of the pandemic. Nearly two-thirds have rethought the place that work should have in their life, and more than half said that the pandemic has made them question the purpose of their day job and made them want to contribute more to society. And, last fall, a study by Blue Beyond Consulting and Future Workplace demonstrated the importance of values. According to the survey, 52% of workers would quit their job — and only 1 in 4 would accept one — if company values were not consistent with their values. 

The Never-Ending To-Do List of the DBA

Dealing with performance problems is usually the biggest post-implementation nightmare faced by DBAs. As such, the DBA must be able to proactively monitor the database environment and to make changes to data structures, SQL, application logic, and the DBMS subsystem itself in order to optimize performance. ... Applications and data are more and more required to be up and available 24 hours a day, seven days a week. Globalization and e-business are driving many organizations to implement no-downtime, around-the-clock systems. To manage in such an environment, the DBA must ensure data availability using non-disruptive administration tactics. ... Data, once stored in a database, is not static. The data may need to move from one database to another, from the DBMS into an external data set, or from the transaction processing system into the data warehouse. The DBA is responsible for efficiently and accurately moving data from place to place as dictated by organizational needs. ... The DBA must implement an appropriate database backup and recovery strategy for each database file based on data volatility and application availability requirements. 

The brave, new world of work

The recent disruptions to the physical workplace have highlighted the importance of the human connections that people make on the job. In an excerpt from her new book, Redesigning Work, Lynda Gratton of the London Business School plays off an insight made nearly 50 years ago by sociologist Mark Granovetter. Granovetter famously discussed the difference between “weak” and “strong” social ties and showed that, when it came to finding jobs, weak ties (the loose acquaintances with whom you might occasionally exchange an email but don’t know well) could actually be quite powerful. Gratton applies this thinking to the way that networks are formed on the job, and to how people organize to get their work done, get new information, and innovate. She concludes that, especially in an age of remote and hybrid work, companies have to redouble their efforts to ensure that employees are able to establish and mine the power of weak ties. For Gratton, the ability to create such connections is a must-have. ... Now more than ever, people have to engage in the often challenging task of drawing boundaries. 

Most-wanted soft skills for IT pros: CIOs share their recruiting tips

Today’s IT organizations are called upon to drive and deliver significant transformation as technology seeps into all corners of a company and its products and services. With that, new and refined skills are necessary for successful technology leaders to influence business outcomes, innovation, and product development. Empathy, managing ambiguity, and collaborative influence drive innovation and are attributes we look for at MetaBank as we hire and develop top talent. Empathy lies at the core of successful problem-solving – viewing a problem from various angles leads to better solutions. ... Leaders often face challenging circumstances where they must quickly make a tough call with insufficient information. Making good choices in these situations can be critical for an organization’s success. It isn’t always easy to assess this in an interview, but behavioral interview questions and careful follow-up can help elicit specific examples from a candidate’s past work experience that may shed light on their judgment.

6 key steps to develop a data governance strategy

Much of the daily work of data governance occurs close to the data itself. The tasks that emerge from the governance strategy will often be in the hands of engineers, developers and administrators. But in too many organizations, these roles operate in silos separated by departmental or technical boundaries. To develop and apply a governance strategy that can consistently work across boundaries, some top-down influence is required. ... Horror stories of fines for breaching the EU's GDPR law on data privacy and protection might keep business leaders awake at night. This drastic approach may generate some interoffice memos or even unlock some budgetary constraints, but that would be a defensive reaction and possibly create resentment among stakeholders, which is no way to secure long-term good data governance. Instead, try this incremental approach, which should be much more attractive to executives: "Data governance is something we already do, but it's largely informal and we need to put some process around it. In doing so, we will meet regulatory demands, but we will also be a more functional, resilient organization."

8 Master Data Management Best Practices

When software development began embracing agile methodologies, its value to the business skyrocketed. That’s why we believe an MDM best practice is to embrace DataOps. DataOps acknowledges the interconnected nature of data engineering, data integration, data quality, and data security/privacy. It aims to help organizations rapidly deliver data that not only accelerates analytics but also enables analytics that were previously deemed impossible. DataOps provides a myriad of benefits, ranging from “faster cycle times” to “fewer defects and errors” to “happier customers.” By adopting DataOps, your organization will have in place the practices, processes, and technologies needed to accelerate the delivery of analytics. You’ll bring rigor to the development and management of data pipelines. And you’ll enable CI/CD across your data ecosystem.
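As one concrete (and hypothetical) illustration of that rigor, a CI stage for a data pipeline might run a quality gate like the following before promoting a batch of master records; the Row shape and the specific checks are invented for the example:

```java
// Hypothetical DataOps quality gate: the kind of automated check a CI/CD
// stage could run against a batch of master data before promoting it.
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class QualityGate {
    record Row(String customerId, String email) {}

    /** Returns a list of data-quality violations; empty means the batch passes. */
    static List<String> validate(List<Row> batch) {
        List<String> errors = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        for (Row r : batch) {
            if (r.customerId() == null || r.customerId().isBlank())
                errors.add("missing customerId");
            else if (!seen.add(r.customerId()))
                errors.add("duplicate customerId: " + r.customerId());
            if (r.email() == null || !r.email().contains("@"))
                errors.add("bad email for " + r.customerId());
        }
        return errors;
    }

    public static void main(String[] args) {
        List<Row> batch = List.of(
            new Row("c1", "a@example.com"),
            new Row("c1", "b@example.com"),   // duplicate key
            new Row("c2", "not-an-email"));   // malformed email
        // A non-empty result would fail the CI stage and block promotion.
        validate(batch).forEach(System.out::println);
    }
}
```

The point is the placement, not the checks themselves: the same validation runs on every change to the pipeline, which is what "CI/CD across your data ecosystem" means in practice.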

5 tips for building your innovation ecosystem

A common mistake when looking for innovative technology vendors is to look at companies touted as the most innovative or to go with best-of-breed, on the assumption that innovation is baked into their roadmap. It’s likely that neither approach will net you the innovation you’re looking for. Best-of-breed works well for internal IT such as your ERP or CRM, or anything under the covers in terms of client-facing solutions, but when it comes to your value proposition and differentiation you need to look elsewhere. In this case, the best-of-breed tools become the table stakes that you utilize as the foundation for your ecosystem or industry-cloud and your core IP comprises your own IP plus that of those innovative players that you’ve developed unique relationships with. The “most innovative” lists you find on the internet are often based on public or editor opinion and end up surfacing the usual suspects with strong brand awareness. While they may be leading players in the market, this does not guarantee continued innovation. If you do look at the “most innovative” lists, be sure to check the methodology involved and see how it fits your own definition and expectations for what constitutes innovation.

Zen and the Art of Data Maintenance: All Data is Suffering (Part 1)

Data can be used for many types of nefarious activities. For instance, an article in Wired described how a website stored video data regarding child sex abuse acts and how that data was used in threatening, destructive ways, leading to all sorts of suffering, including suicide attempts. We are often bombarded with social media data (both factual and misinformation) designed to hold our attention through emotional disturbances such as fear. These are generally intended to elicit reactions or control behavior regarding many matters, including purchasing, voting, mindshare, or almost any other matter. Have you suffered with data? How? Data is the plural form of the Latin word ‘datum’, which Merriam-Webster defines as ‘something given or admitted as a basis for reasoning or inference’. Thus, everything we receive through our senses could be considered data. It could be numbers, text, things we see, hear, or feel. But how could all data be suffering? What about positive data that communicates increased sales, better health, positive comments, data showing helpful contributions, and so on? 

The Metamorphosis of Data Governance: What it Means Today

There’s nothing more galvanizing to an organization’s board of directors—or the C-Level executives who directly answer to it—than stiff monetary penalties for noncompliance with regulations. Zoom reached a settlement of almost $100 million for such issues. Even before this particular example, data governance was inexorably advancing to its current conception as a means of facilitating access control, data privacy, and security. “These are big ticket fines that are coming up,” Ganesan remarked. “Boards are saying we need to have guardrails around our data. Now, what has changed in the last few years is that part of governance, which is security and privacy, is going from being passive to more active.” Such activation entails not only what data governance focuses on, but also what its specific policies focus on. The regulatory, risk mitigation side of data governance is currently being emphasized. It’s no longer adequate to have guidelines or even rules on paper about how data are accessed—top solutions in this space can propel those policies into source systems to ensure adherence when properly implemented. 

Five Steps Every Enterprise Architect Should Take for Better Presentation

Architects invariably care about the material they’re discussing. The mistake is believing or assuming that the audience cares as intently. They may. They may already be familiar with the content. This may simply be a status update on the latest digital transformation project, and everyone is knowledgeable about the subject matter. ... Generally speaking, the audience isn’t going to automatically care as much about the material as the Architect presenting does. The key to this step is usually the hardest of all the points made in this article. The key is empathy. Thinking what you would do or what you would be interested in if you were the listener is not empathy. That’s simply you projecting your own headspace onto the audience. Trying to understand how that person is receiving your information is the key. Why do they care? What aspects will they be interested in? To do this requires knowing in advance who you will be speaking to and knowing their background, their education, their professional position, their issues or problems with the subject at hand… knowing, in effect, through what lens they will be viewing your content.

Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it." -- Marian Anderson

Daily Tech Digest - May 21, 2022

How to make the consultant’s edge your own

What actually works, should the organization be led by a braver sort of leadership team, is a change in the culture of management at all levels. The change is that when something bad happens, everyone in the organization, from the board of directors on down, assumes the root cause is systemic, not a person who has screwed up. In the case of my client’s balance sheet fiasco, the root cause turned out to be everyone doing exactly what the situation they faced Right Now required. What had happened was that a badly delayed system implementation, coupled with the strategic decision to freeze the legacy system being replaced, led to a cascade of PTFs (Permanent Temporary Fixes to the uninitiated) to get through month-end closes. The PTFs, being temporary, weren’t tested as thoroughly as production code. But being permanent, they accumulated and sometimes conflicted with one another, requiring more PTFs each month to get everything to process. The result: Month ends did close, nobody had to tell the new system implementation’s executive sponsor about the PTFs and the risks they entailed, and nobody had to acknowledge that freezing the legacy system had turned out to be a bad call.

SBOM Everywhere: The OpenSSF Plan for SBOMs

The SBOM Everywhere working group will focus on ensuring that existing SBOM formats match documented use cases and developing high-quality open source tools to create SBOM documents. Although some of this tooling exists today, more tooling will need to be built. The working group has also been tasked with developing awareness and education campaigns to drive SBOM adoption across open source, government and commercial industry ecosystems. Notably, the U.S. federal government has taken a proactive stance on requiring the use of SBOMs for all software consumed and produced by government agencies. The Executive Order on Improving the Nation’s Cybersecurity cites the increased frequency and sophistication of cyberattacks as a catalyst for the public and private sectors to join forces to better secure software supply chains. Among the mandates is the requirement to use SBOMs to enhance software supply chain security. For government agencies and the commercial software vendors who partner and sell to them, the SBOM-fueled future is already here.

Cybersecurity pros spend hours on issues that should have been prevented

“Security is everyone’s job now, and so disconnects between security and development often cause unnecessary delays and manual work,” said Invicti chief product officer Sonali Shah. “Organizations can ease stressful overwork and related problems for security and DevOps teams by ensuring that security is built into the software development lifecycle, or SDLC, and is not an afterthought,” Shah added. “Application security scanning should be automated both while the software is being developed and once it is in production. By using tools that offer short scan times, accurate findings prioritized by contextualized risk and integrations into development workflows, organizations can shift security left and right while efficiently delivering secure code.” When it comes to software development, innovation and security don’t need to compete, according to Shah. Rather, they’re inherently linked. “When you have a proper security strategy in place, DevOps teams are empowered to build security into the very architecture of application design,” Shah said.

SmartNICs power the cloud, are enterprise datacenters next?

For all the potential SmartNICs have to offer, there remain substantial barriers to overcome, the high price of SmartNICs relative to standard NICs being one of many. Networking vendors have been chasing this kind of I/O offload functionality for years, with things like TCP offload engines, Kerravala said. "That never really caught on, and cost was the primary factor there." Another challenge for SmartNIC vendors is the operational complexity associated with managing a fleet of SmartNICs distributed across a datacenter or the edge. "There is a risk here of complexity getting to the point where none of this stuff is really usable," he said, comparing the SmartNIC market to the early days of virtualization. "People were starting to deploy virtual machines like crazy, but then they had so many virtual machines they couldn't manage them," he said. "It wasn't until VMware built vCenter that companies had one unified control plane for all their virtual machines. We don't really have that on the SmartNIC side." That lack of centralized management could make widespread deployment a tough sell in environments that don't have the resources commanded by the major hyperscalers.

Fantastic Open Source Cybersecurity Tools and Where to Find Them

Organizations benefit greatly when threat intelligence is crowdsourced and shared across the community, said Sanjay Raja, VP of product at Gurucul. "This can provide immediate protection or detection capabilities, while reducing dependency on vendors, who often do not provide updates to systems for weeks or even months," he said. For example, CISA has an Automated Indicator Sharing platform. Meanwhile, in Canada, there's the Canadian Cyber Threat Exchange. "These platforms allow for the real-time exchange and consumption of automated, machine-readable feeds," explained Isabelle Hertanto, principal research director in the security and privacy practice at Info-Tech Research Group. This steady stream of indicators of compromise can help security teams respond to network security threats, she told Data Center Knowledge. In fact, the problem isn't a lack of open source threat intelligence data, but an overabundance, she said. To help data center security teams cope, commercial vendors are developing AI-powered solutions to aggregate and process all this information. "We see this capability built into next generation commercial firewalls and new SIEM and SOAR platforms," Hertanto said.
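The "automated, machine-readable feeds" Hertanto describes are commonly exchanged as STIX 2.1 bundles over TAXII. As a rough sketch, the snippet below parses a hypothetical STIX-style bundle and pulls IPv4 indicators of compromise out of the indicator patterns; the sample data is invented for illustration, and a real consumer would use a proper STIX library rather than a regex.

```python
import json
import re

# Hypothetical STIX 2.1-style bundle, similar in shape to what a feed delivers.
bundle = json.loads("""
{
  "type": "bundle",
  "objects": [
    {"type": "indicator",
     "pattern": "[ipv4-addr:value = '198.51.100.7']",
     "valid_from": "2022-05-01T00:00:00Z"},
    {"type": "identity", "name": "Example Sharing Org"}
  ]
}
""")

def extract_ipv4_iocs(bundle: dict) -> list[str]:
    """Pull IPv4 values out of indicator patterns; skip non-indicator objects."""
    iocs = []
    for obj in bundle.get("objects", []):
        if obj.get("type") != "indicator":
            continue
        iocs += re.findall(r"ipv4-addr:value\s*=\s*'([^']+)'",
                           obj.get("pattern", ""))
    return iocs
```

Extracted values would then feed block lists or detection rules, which is where the "overabundance" problem Hertanto mentions kicks in: aggregation and deduplication quickly matter more than collection.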

Living better with algorithms

Together with Shah and other collaborators, Cen has worked on a wide range of projects during her time at LIDS, many of which tie directly to her interest in the interactions between humans and computational systems. In one such project, Cen studies options for regulating social media. Her recent work provides a method for translating human-readable regulations into implementable audits. To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see? Designing an auditing procedure is difficult in large part because there are so many stakeholders when it comes to social media. Auditors have to inspect the algorithm without accessing sensitive user data. They also have to work around tricky trade secrets, which can prevent them from getting a close look at the very algorithm that they are auditing because these algorithms are legally protected.
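To make the regulators' requirement concrete: one simple way an auditor might operationalize "not vastly different" is to compare the distributions of content categories shown to left- and right-leaning users and check that their statistical distance stays under a tolerance. This is an illustrative toy check, not the auditing method from Cen's work; the distributions and tolerance below are made up.

```python
def total_variation(p: dict, q: dict) -> float:
    """Total variation distance between two content-exposure distributions."""
    topics = set(p) | set(q)
    return 0.5 * sum(abs(p.get(t, 0.0) - q.get(t, 0.0)) for t in topics)

def passes_parity_audit(left_exposure: dict, right_exposure: dict,
                        tolerance: float = 0.1) -> bool:
    """Hypothetical audit: exposure gap between groups must stay within tolerance."""
    return total_variation(left_exposure, right_exposure) <= tolerance
```

Even this toy version surfaces the tensions in the paragraph above: computing the distributions requires access to what users actually saw, which collides with user privacy, and the tolerance itself is a policy choice, not a technical one.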

CFO perspectives on leading agile change

In an agile organization, leadership-level priorities cascade down to inform every part of the business. For this reason, CFOs talked extensively about the importance of setting up a prioritization framework that is as objective as possible. Many participants mentioned that it can be challenging to work out priorities through the QBR process, because different teams lack an institutional mechanism through which to weigh different work segments against one another and prioritize between them. Most CFOs agreed that some degree of direction from the top is required in this area. One CFO said he thinks of his organization as a “prioritization jar”: leadership puts big stones in the jar first and then fills in the spaces with sand. These prioritization “stones” might be six key projects identified by management, or they might be 20 key initiatives chosen through a mixture of leadership direction and feedback from tribes. A second challenge emerged regarding shifting resources among teams or clusters responsible for individual initiatives. When asked what they would do if they had a magic wand, several CFOs said they need better ways to reallocate resources at short notice. 

Friend Or Foe: Delving Into Edge Computing & Cloud Computing

One of the most significant features of edge computing is decentralization. Edge computing extends a single computing infrastructure and its transmission channel outward, so that resources and communication technologies sit close to where they are used. It optimizes computational needs by pushing cloud capabilities to the edge: when data is gathered or a user takes an action, it can be processed in real time, wherever that processing is needed. The two most significant advantages of edge computing are increased performance and lower operational expenses. ... The first thing to realize is that cloud computing and edge computing are not rival technologies. They aren't different solutions to the same problem; rather, they're two distinct ways of addressing particular problems. Cloud computing is ideal for scalable applications that must be ramped up or down depending on demand. Web servers, for example, can request extra resources during periods of heavy usage to ensure smooth service without incurring any long-term hardware expenses.

Why AI and autonomous response are crucial for cybersecurity

Remote work has become the norm, and outside the office walls, employees are letting down their personal security defenses. Cyber risks introduced by the supply chain via third parties are still a major vulnerability, so organizations need to think about not only their defenses but those of their suppliers to protect their priority assets and information from infiltration and exploitation. And that’s not all. The ongoing Russia-Ukraine conflict has provided more opportunities for attackers, and social engineering attacks have ramped up tenfold and become increasingly sophisticated and targeted. Both play into the fears and uncertainties of the general population. Many security industry experts have warned about future threat actors leveraging AI to launch cyber-attacks, using intelligence to optimize routes and hasten their attacks throughout an organization’s digital infrastructure. “In the modern security climate, organizations must accept that it is highly likely that attackers could breach their perimeter defenses,” says Steve Lorimer, group privacy and information security officer at Hexagon.

Service Meshes Are on the Rise – But Greater Understanding and Experience Are Required

We explored the factors influencing people’s choices by asking which features and capabilities drive their organization’s adoption of service mesh. Security is a top concern, with 79% putting their faith in techniques such as mTLS authentication of servers and clients during transactions to help reduce the risk of a successful attack. Observability came a close second behind security, at 78%. As cloud infrastructure has grown in importance and complexity, we’ve seen a growing interest in observability to understand the health of systems. Observability entails collecting logs, metrics, and traces for analysis. Traffic management came third (62%). This is a key consideration given the complexity of cloud native that a service mesh is expected to help mitigate. ... Potential issues here include latency, lack of bandwidth, security incidents, the heterogeneous composition of the cloud environment, and changes in architecture or topology. Respondents want a service mesh to overcome these networking and in-service communications challenges.
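The mTLS that 79% of respondents cite simply means both ends of a connection present and verify certificates, rather than only the server. In a mesh, the sidecar proxies handle this transparently, but the core requirement can be sketched with Python's standard `ssl` module; the certificate file names below are placeholders, not real mesh artifacts.

```python
import ssl

# Server-side TLS context. Plain TLS authenticates only the server;
# mTLS additionally makes the server demand and verify a client certificate.
ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

# In a service mesh, the sidecar would load the workload's certificate and
# the mesh's CA bundle here (hypothetical file names):
# ctx.load_cert_chain("workload.crt", "workload.key")
# ctx.load_verify_locations("mesh-ca.crt")
```

Because the mesh issues and rotates these certificates automatically, each service-to-service hop is mutually authenticated without application code ever touching a key, which is why mTLS tops the survey's adoption drivers.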

Quote for the day:

"To command is to serve: nothing more and nothing less." -- André Malraux