Daily Tech Digest - June 21, 2023

India’s digital transformation could be a game-changer for economic development

India currently has a data-fiduciary-centric model. Individuals or small businesses must go to the original keeper of data to access their data. This inhibits the use of data for the financial empowerment of individuals. The current method of storing financial data across institutions and companies is also inefficient, resulting in the use of notarized hard copies, PDFs, screen scraping, password sharing, etc., all of which pose a threat to individual privacy. Accessing and sharing information can be difficult because of the varied formats. This forces individuals and institutions to rely on patchwork solutions. ... Account aggregators (AAs) can be thought of as traffic police between Financial Information Users (FIUs) and Financial Information Providers (FIPs), with users having complete control over the flow of information. The introduction of AA architecture could revolutionize how financial data is shared, similar to the impact UPI has had on money transfers. The AA ecosystem is cross-sectoral, with customers at the center. AAs provide a secure interface that allows users to consent to share private and sensitive data. This democratizes data use and sharing, enabling FIUs to request users' financial information.
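
To make the flow concrete, here is a toy model of that consent-mediated exchange in Python. The class and method names are illustrative inventions, not the actual AA technical specification; the point is only that the aggregator checks a user-granted, revocable consent and relays data without keeping it.

```python
from dataclasses import dataclass, field

@dataclass
class Consent:
    """A user-granted consent artifact; all fields are illustrative."""
    user_id: str
    fiu_id: str        # Financial Information User requesting the data
    fip_id: str        # Financial Information Provider holding the data
    data_types: set = field(default_factory=set)
    active: bool = True

class AccountAggregator:
    """Toy 'traffic police' between FIUs and FIPs: data flows only through
    consents the user has explicitly granted and can revoke at any time."""
    def __init__(self):
        self.consents: list[Consent] = []

    def grant(self, consent: Consent):
        self.consents.append(consent)

    def revoke(self, user_id: str, fiu_id: str):
        for c in self.consents:
            if c.user_id == user_id and c.fiu_id == fiu_id:
                c.active = False

    def request_data(self, fiu_id: str, user_id: str, fip, data_type: str):
        """The AA never stores the data itself; it checks consent and
        relays the request to the FIP."""
        for c in self.consents:
            if (c.active and c.user_id == user_id
                    and c.fiu_id == fiu_id and data_type in c.data_types):
                return fip.fetch(user_id, data_type)
        raise PermissionError("No active consent for this request")

class Bank:  # stand-in FIP
    def fetch(self, user_id, data_type):
        return {"user": user_id, "type": data_type, "rows": ["..."]}

aa = AccountAggregator()
aa.grant(Consent("u1", "lender-app", "bank-1", {"deposits"}))
print(aa.request_data("lender-app", "u1", Bank(), "deposits"))
```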


8 ways to detect (and reject) terrible IT consulting advice

Recommendations are great, but they don’t automatically turn into solutions. “Most of the consultant’s dialogue should be repeating back to you the problem they’re solving,” advises Bill Carslay, senior vice president and general manager of professional services at IT support services firm Rimini Street. “The resulting solution should be directly related to the problem as it’s defined in your terms, and should follow the steps and phases your organization is willing to take.” When a consultant grabs onto a common IT challenge and quickly describes how they will solve it, it’s likely the solution won’t fully address the very specific problem an organization may be facing. “Keep in mind that one size doesn’t fit all, and be on the lookout for recommendations that fit or augment the parameters you’ve set,” Carslay suggests. ... When advice lacks logical reasoning, contradicts data, or fails to consider long-term consequences, it’s likely terrible. “A critical mind and rigorous evaluation will help you distinguish the good from the bad,” says Edward Kring, vice president of engineering at software development company Invozone.com.


Three Data Removal Myths That Provide a False Sense of Security

There are many ways to attempt to remove a file -- such as data deletion, wiping, factory reset, reformatting, and file shredding -- but without proper context, each of these solutions can be incomplete on its own. For example, deleting a file and emptying the recycle bin can remove pointers to files containing data but not the data itself. The data remains easily recoverable until it is overwritten. A factory reset removes all used data as it restores a device to factory settings, but not all methodologies used in resets lead to complete erasure, and there’s no way to validate that all data is gone. Data wiping is the process of overwriting data without verification. File shredding destroys data on individual files by overwriting the space with a random pattern of 1s and 0s. Because neither method provides verification that the process was completed successfully across all sectors of the device, they are considered incomplete. Finally, reformatting, which is performed on a working disk drive to eradicate its contents, is another method where most of the data can be recovered with forensics tools available online.
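
The missing verification step is the crux, and it is easy to see what it would look like. Below is a minimal sketch of overwrite-with-verification for a single ordinary file, assuming a traditional drive; real sanitization follows standards such as NIST SP 800-88, and on SSDs wear-leveling means a logical overwrite still cannot be fully trusted.

```python
import os

def overwrite_and_verify(path: str, passes: int = 1) -> bool:
    """Overwrite a file in place with random bytes, then read it back to
    confirm the pattern landed. Simplified: on SSDs, wear-leveling and
    spare blocks mean this does NOT guarantee every physical copy is gone."""
    size = os.path.getsize(path)
    for _ in range(passes):
        pattern = os.urandom(size)
        with open(path, "r+b") as f:
            f.write(pattern)
            f.flush()
            os.fsync(f.fileno())  # push the write to the device
        # Read back: the verification step that ad-hoc wiping tools skip.
        with open(path, "rb") as f:
            if f.read() != pattern:
                return False
    return True
```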


Measuring engineering velocity misses all the value

Story point velocity has become the dominant driver of agile software development lifecycles (SDLCs) with the rise of scrum. How many story points did the team complete this week? How can we get them to deliver more points while still meeting the acceptance criteria? Speed is treated as synonymous with success, and acceleration is hailed as the primary focus of any successful engineering enterprise. Deliver more story points and you’re clearly “doing the thing.” The impulse is not without some logic. From the C-suite perspective, a perfect product that misses its moment on the market isn’t worth much. Sure, it may be full of engineering genius, but if it generates little to no business value, it quickly becomes more “museum relic” than “industry game-changer.” It pays to be first. In fact, one study found that accelerating time to market by just 5% could increase ROI by almost 13%. However, I believe that a simplistic obsession with speed misses several factors critical to optimizing the actual impact of any software solution.


Developers’ Role in Protecting Privacy

Although sharing data has become commonplace in exchange for benefits and value, consumers are becoming more aware of privacy issues. Take the EU’s General Data Protection Regulation (GDPR) as an example. Over the past five years, awareness has more than doubled in notable European markets such as the UK, Spain, Germany, the Netherlands and France. Meanwhile, there is also commercial pressure, as employers rely on developers to innovate to remain profitable. At the same time, customers expect brands to be responsible with their data, and failure to do so at the expense of trying to commercialize a new application could be detrimental. Indeed, while the pandemic may have ushered in significant changes and altered consumers’ attitudes toward data privacy, end users remain unwavering about the importance of security. Maintaining this balancing act is becoming increasingly complex to achieve. However, the question of data privacy is becoming a key business priority, and that means developers have a big opportunity to show their commercial value to their organizations.


Why CISOs should be concerned about space-based attacks

Making matters worse is the tendency for many satellites to be ‘dual use’ carriers, in that they provide services that are used by both commercial and military clients. As such, “US commercial satellites may be seen as legitimate targets in case they are used in the conflict in Ukraine,” reported the Russian state-owned news agency TASS on October 27, 2022. Speaking before the UN General Assembly’s First Committee, Russian Foreign Ministry official Konstantin Vorontsov threatened that, “Quasi-civil infrastructure may be a legitimate target for a retaliation strike.” This has certainly been true for SpaceX’s Starlink satellite broadband service in Ukraine. "Some Starlink terminals near conflict areas were being jammed for several hours at a time,” SpaceX CEO Elon Musk said in a Twitter message posted on March 5, 2022. “Our latest software update bypasses the jamming. Am curious to see what’s next!” Such threats and actions come as no surprise to Laurent Franck, a satellite consultant and ground systems expert with the Euroconsult Group. Whenever a commercial satellite “can be used on a battlefield and used in a war context, it becomes a target,” he says. 


Who Is Responsible for Identity Threat Detection and Response?

For organizations just starting to develop an ITDR program, Jones recommends they start by conducting a thorough risk assessment to identify critical assets and potential threats. “Assign a dedicated ITDR owner or team responsible for coordinating prevention, detection, and response efforts, and develop a comprehensive ITDR plan that outlines roles, responsibilities, and processes for each stage of the ITDR lifecycle,” he says. He adds it’s important to regularly test and update the ITDR plan, incorporating lessons learned from past incidents and staying informed about the latest threats and technologies. Craig Debban, CISO for QuSecure, explains that for a lot of organizations, there is a dependence on a disparate set of systems that are on-prem, in the cloud, or both -- and they are not always well integrated. “User identities are then decentralized since they are replicated in multiple places,” he says. “This diversity leads to gaps in functionality for the end user, negatively impacts operational efficiency, and is often overcome by oversubscribing permissions which impacts overall security and risk across the business.”


You can’t be an averagely talented programmer

In some ways, the level of engineering capability which people need is only going to become higher in terms of writing these AI systems and being able to engineer them. That said, this only applies to the very best programmers. You can’t be an averagely talented programmer anymore. With some of our large operations it’s clear by the way they are adopting automation that we won’t need a large number of developers. We will start having fewer people of that kind. People who actually understand engineering are going to become more in demand, and the people who just operate the technology will be less valuable. ... Right now, the technology industry needs a lot of people. But I see a lot of people who don’t really understand the technology or worse, they are afraid of technology. A lot of people who do not come from a computer science background can be working for tech companies but really are afraid of the technology. That’s not sustainable. Having a genuine interest in technology is, I would say, an important condition to reaching or exceeding your potential in a tech firm. Understand what’s happening in technology and do not be afraid of it.


How to Choose the Right Identity Resolution System

A best-in-class approach to identity resolution enables you to match many identifiers to the same person and then set the priority of matching to control how profiles are stitched together. ... While deterministic identity resolution might seem overly rigorous, it’s actually highly beneficial for personalization. Personalization use cases (sending an email, delivering a recommendation, and so on) require 100% confidence that a user is who you think they are. The only way to guarantee that confidence is through a deterministic identity algorithm. The alternative is simply guesswork and increases the likelihood that your personalization (or lack thereof) will have a detrimental impact on your customer relationships. A deterministic identity resolution solution enables 100% reliable profile unification, honoring the exact first-party data a customer provides to a brand. More importantly, embracing a deterministic approach as the core of your identity strategy will allow you to build high-quality customer profiles that power the personalized experiences customers have come to expect.
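
A deterministic matcher is simple enough to sketch. The toy below stitches profiles together only on exact identifier matches, checked in a configurable priority order; the field names and priority list are illustrative, not any vendor's actual schema.

```python
# Toy deterministic identity resolution: profiles merge only on an exact
# match of a trusted identifier, checked in priority order.
MATCH_PRIORITY = ["customer_id", "email", "phone"]  # illustrative order

def resolve(records):
    index = {}   # (identifier_field, value) -> earlier record id
    parent = {}  # union-find over record ids

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for i, rec in enumerate(records):
        parent[i] = i
        for key in MATCH_PRIORITY:
            value = rec.get(key)
            if not value:
                continue
            if (key, value) in index:
                union(i, index[(key, value)])  # exact match: stitch
                break  # highest-priority match wins; no fuzzy guessing
            index[(key, value)] = i
    profiles = {}
    for i in range(len(records)):
        profiles.setdefault(find(i), []).append(records[i])
    return list(profiles.values())

merged = resolve([
    {"email": "a@x.com", "phone": "555-0100"},
    {"email": "a@x.com", "customer_id": "C1"},
    {"phone": "555-0199"},
])
# The first two records stitch into one profile; the third stays separate.
```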


How to Become a Business Intelligence Analyst

As much as business intelligence can be about interpersonal action, much of an analyst’s duties are solitary ones, chief among them the authoring of procedures for data processing and collection. From there on, expect reporting and more reporting, including analytical reports that can be personalized for the needs of stakeholders, highlighting the most departmentally relevant findings. A business intelligence analyst also needs to maintain an active role in the various life cycles of data as it moves throughout the organization. After all, data reports are built upon regularly monitoring the way data is collected, looking at field reports, product summaries from third parties, and even public records. As a function of this, a BIA may want to continually track burgeoning trends in tech or emerging markets that could potentially offer efficiency or value within the industry and their specific enterprise. Working in concert with specialists in data governance and stewardship, a BIA must oversee the integrity, security, and location of data storage.



Quote for the day:

"A coach is someone who can give correction without causing resentment." -- John Wooden

Daily Tech Digest - June 20, 2023

How to navigate the co-management conundrum in MSP engagements

Ironically, enterprises can often suppress innovation by using MSPs transactionally. If the enterprise team has active roles in the delivery of services, it can help guard against transactional thinking and foster a more cooperative style from both parties. If the enterprise team behaves transactionally, because they don’t work alongside the MSP but focus only on inputs and outputs or reported results, then, eventually, the MSP team can also tend to behave more transactionally. This places an unwanted governor on good ideas and flexibility from within the established collective resources. This doesn’t mean that there isn’t the need to have a robust management framework, including a statement of work (SOW) where commitments are clearly articulated. However, even if obligations are ultimately with the MSP, co-management of some of the task inputs or signoffs under a SOW can sometimes lead to more pragmatic, dispute-avoiding working practices.


ChatGPT and data protection laws: Compliance challenges for businesses

ChatGPT is not exempt from data protection laws, such as the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), the Payment Card Industry Data Security Standard (PCI DSS), and the Consumer Privacy Protection Act (CPPA). Many data protection laws require explicit user consent for the collection and use of personal data. ... By utilizing ChatGPT and sharing personal information with a third-party organization like OpenAI, businesses relinquish control over how that data is stored and used. This lack of control increases the risk of non-compliance with consent requirements and exposes businesses to regulatory penalties and legal consequences. Additionally, data subjects have the right to request the erasure of their personal data under the GDPR’s “right to be forgotten.” When using ChatGPT without the proper safeguards in place, businesses lose control of their information and no longer have mechanisms in place to promptly and thoroughly respond to such requests and delete any personal data associated with the data subject.
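
One safeguard of the kind implied here is to redact obvious identifiers before any prompt leaves the organization for a third-party model. A minimal sketch follows; the patterns are deliberately simplistic, and production PII detection needs far more than regexes, alongside contractual and retention controls.

```python
import re

# Simplistic illustrative patterns; real PII detection needs much more.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholders before the text is sent to
    any third-party service, so raw identifiers never leave the boundary."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com called from +44 20 7946 0958."
print(redact(prompt))
# Customer [EMAIL] called from [PHONE].
```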


Navigating Cloud Costs and Egress: Insights on Enterprise Cloud Conversations

One of the things that we’ve seen at the enterprise scale is not just cloud egress cost, but the combination of cloud spend and being able to predict spend has been a constant topic of conversation. With the economic downturn, one of the things that we’re seeing is definitely more control over where money is being spent. I wouldn’t say it’s specifically about egress costs. ... The point that I’m trying to make is it kind of goes both ways. Some businesses extended the effect of the economic downturn – and just looking at the trend over a longer period of time, not just now in the last one or two years – is that the more sophisticated the organization is in terms of their capability of operating multiple environments, like an on-prem and the cloud or two clouds, the more likely they are to not buy into the “all-in” cloud. ... A lot of times what we heard from our clients was “I want to be on a cloud. On-prem data centers are done.” But I think about two or three years back is when we saw a wave of conversations in between. [They said] “Okay, I realize that all-in on cloud is not going to be my future.”


Hijacked S3 buckets used in attacks on npm packages

This latest threat is part of a growing trend of groups looking at the software supply chain as an easy way to deploy their malware and quickly have it reach a broad base of potential victims. Through attacks on npm and other repositories like GitHub, Python Package Index (PyPI), and RubyGems, miscreants look to place their malicious code in packages that are then downloaded by developers and used in their applications. In this case, they found their way in via the abandoned S3 buckets, part of AWS object storage services that enable organizations to store and retrieve huge amounts of data – files, documents, and images, among other digital content – in the cloud. They're accessed via unique URLs and used for such jobs as hosting websites and backing up data. The bignum package used node-gyp, a command-line tool written in Node.js, for downloading a binary file that initially was hosted on an S3 bucket. If the bucket couldn't be accessed, the package was prompted to look for the binary locally. "However, an unidentified attacker noticed the sudden abandonment of a once-active AWS bucket," Nachshon wrote.
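
The standard defense against a hijacked download source is to pin and verify a cryptographic digest, so that swapped bytes fail closed rather than execute. A minimal sketch, in which the URL and expected digest are placeholders:

```python
import hashlib
import urllib.request

# Placeholders: pin the digest of the exact binary you expect, recorded
# when the dependency was vetted, so a hijacked bucket serving different
# bytes fails the check instead of running.
BINARY_URL = "https://example-bucket.s3.amazonaws.com/tool-v1.2.3.bin"
EXPECTED_SHA256 = "aabbcc..."  # placeholder digest

def fetch_verified(url: str, expected_sha256: str) -> bytes:
    with urllib.request.urlopen(url) as resp:
        payload = resp.read()
    digest = hashlib.sha256(payload).hexdigest()
    if digest != expected_sha256:
        # Whoever controls the bucket now, their payload does not run.
        raise RuntimeError(f"checksum mismatch: got {digest}")
    return payload
```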


Ending the ‘forever war’ against shadow IT

First, CIOs should establish a quick-reaction team (QRT) that deals only with these small projects that user departments are looking to achieve — especially when it comes to leveraging AI. The QRT needs to be an elite group within IT comprising members who understand the risks of data manipulation, are well versed in security pitfalls, and follow developments in AI enough to know its opportunities and pitfalls. It would be the mission of this group to analyze the requirements and assure that data access is secure and that the user understands the nature of the data being accessed. The QRT would also need to analyze the parameters of the work to be done to assure that the results are not already available from another existing source. They would also determine whether the software is compatible with the existing corporate network. This becomes even more critical if, at some point, the company wishes to scale the application to serve the entire corporation. Second, the shadow IT policy must be understood and enforced by the IT steering committee. 


Your AI coding assistant is a hot mess

As Reeve’s wasted hours of bug-hunting attest, AI tools certainly aren’t foolproof. They’re often trained on open-source code, which frequently contains bugs – mistakes that the assistant is prone to replicating. They’re also notoriously prone to wild delusions, a fact, says Desrosiers, that cybercriminals can use to their advantage. AI coding assistants are liable to occasionally make up the existence of entire coding libraries. “Malicious actors can detect these hallucinations and launch malicious libraries with these names,” he says, “putting at risk people who let these hallucinated libraries execute in their production environment.” Careful oversight, says Desrosiers, is the only solution. That, too, can be facilitated by AI. “To de-risk this and other potential issues [at Visceral], we build single-purpose autonomous coding assistants to monitor for such threats,” says Desrosiers. David Mertz says it’s always important to not be too trusting. “From a security perspective, you just can’t trust code,” says the author and long-time Python programmer.
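
Part of that oversight can be automated. The sketch below vets dependency names against an internal allowlist and PyPI's public JSON endpoint; note that mere existence of a name is not safety, since squatters register hallucinated names precisely so they will resolve. The allowlist is illustrative.

```python
import urllib.request
from urllib.error import HTTPError

APPROVED = {"requests", "numpy"}  # illustrative internal allowlist

def vet_dependency(name: str) -> str:
    """Flag dependencies an assistant may have hallucinated: unknown on
    PyPI, or known but not yet reviewed by the organization."""
    if name in APPROVED:
        return "approved"
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json"):
            pass
    except HTTPError as err:
        if err.code == 404:
            return "does not exist (likely hallucinated)"
        raise
    # The package exists, which is exactly what squatters exploit:
    # existence is not endorsement, so require human review.
    return "exists but unreviewed; hold for approval"

for pkg in ["requests", "definitely-not-a-real-pkg-xyz"]:
    print(pkg, "->", vet_dependency(pkg))
```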


Apple beefs up enterprise identity, device management

It’s important to note that account-driven user enrollment was largely designed as a way for users to enroll their personal devices into MDM, while corporate devices are typically managed with a more traditional profile-based enrollment that gives IT more access and management options. Apple is now offering account-driven device enrollment, which offers added capabilities for IT with a user experience similar to account-driven user enrollment. ... Along with improving the enrollment options, Managed Apple IDs will get more management capabilities. There are two major additions. The first is to control which types of managed devices a user is allowed to access: any device regardless of ownership, only managed devices enrolled via MDM, or only devices that are Supervised. Supervised devices are company-owned and have stringent management controls. The second is the ability to control which iCloud services a user can access on a managed device. Each sync service can be enabled or disabled for a user’s Managed Apple ID.


Prime minister Rishi Sunak faces pressure from banks to force tech firms to pay for online fraud

In response to the TSB CEO’s letter last week, a Meta spokesperson said in a statement: “This is an industry-wide issue and scammers are using increasingly sophisticated methods to defraud people in a range of ways, including email, SMS and offline. We don’t want anyone to fall victim to these criminals, which is why our platforms already have systems to block scams, financial services advertisers now have to be FCA authorised to target UK users and we run consumer awareness campaigns on how to spot fraudulent behaviour. People can also report this content in a few simple clicks and we work with the police to support their investigations.” But, in the letter to Sunak, banks said they want the tech companies to stop fraud on their platforms and to contribute to refunds for victims. They also called for a public register showing the failure of tech giants to stop scams. The letter warned that the high level of fraud was “having a material impact on how attractive the wider UK financial sector is perceived by inward investors, which as we know, is critical for the health of the City of London and wider UK economy”.


Why assessing third parties for security risk is still an unsolved problem

The challenge that TPRM companies have is rather simple: Provide a mechanism for companies that do business with other companies to evaluate the risk that their vendors present to them, from a cybersecurity perspective. SecurityScorecard and its primary competitor, BitSight, use a similar methodology: Create a risk score (sort of like your credit score), evaluate companies, and score them. ... The credit reporting agencies, for better or worse, have much more data than the TPRM scoring companies. They’re embedded throughout our financial system, collecting a lot of information that shouldn’t be publicly available. The TPRM scoring companies, on the other hand, are doing the equivalent of drive-by appraisals. They look at the outside of businesses on the internet and decide how reputable they are based on their external appearances. Of course, certain business types will look more secure than others. The alternative to TPRM scoring is, sadly, the TPRM questionnaire industry, which is only marginally less unhelpful. This is an industry focused on shipping massive questionnaires to vendors, which take huge efforts to fill out.


Debugging Production: eBPF Chaos

Tools and platforms based on eBPF provide great insights and help debug production incidents. These tools and platforms will need to prove their strengths and unveil their weaknesses, for example, by attempting to break or attack the infrastructure environments and observe the tool/platform behavior. To start, let’s focus on observability and chaos engineering. The Golden Signals (Latency, Traffic, Errors, Saturation) can be verified using existing chaos experiments that inject CPU/Memory stress tests, TCP delays, DNS random responses, etc. ... Continuous Profiling with Parca uses eBPF to auto-instrument code, so that developers don’t need to modify the code to add profiling calls, helping them to focus. The Parca agent generates profiling data insights into callstacks, function call times, and generally helps to identify performance bottlenecks in applications. Adding CPU/Memory stress tests influences the application behavior, can unveil race conditions and deadlocks, and helps to get an idea of what we are actually trying to optimize.
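
A chaos experiment of this kind can be tiny. The self-contained sketch below injects CPU stress and watches the latency golden signal of a stand-in workload; a real experiment would point tools such as stress-ng or Chaos Mesh at the instrumented service instead.

```python
import multiprocessing
import statistics
import time

def spin():
    """CPU stress injector: burn a core until terminated."""
    while True:
        pass

def workload():
    """Stand-in for the service under test."""
    sum(i * i for i in range(10_000))

def median_latency_us(fn, samples=200):
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.median(timings) * 1e6

if __name__ == "__main__":
    baseline = median_latency_us(workload)
    hogs = [multiprocessing.Process(target=spin)
            for _ in range(multiprocessing.cpu_count())]
    for p in hogs:
        p.start()
    try:
        stressed = median_latency_us(workload)  # the latency golden signal
    finally:
        for p in hogs:
            p.terminate()
    print(f"median latency: {baseline:.0f}us -> {stressed:.0f}us under stress")
```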



Quote for the day:

"Uncertainty is a permanent part of the leadership landscape. It never goes away." -- Andy Stanley

Daily Tech Digest - June 19, 2023

Finding the Nirvana of information access control or something like it

In the mythical land of Nirvana, where everything is perfect, CISOs would have all the resources they needed to protect corporate information. The harsh reality, which each CISO experiences on the daily, is that few entities have unlimited resources. Indeed, in many entities when the cost-cutting arrives, it is not unusual for security programs that have not (so far) positioned themselves as a key ingredient in revenue preservation to be thrown by the wayside — if you ever needed motivation to exercise access control to information, there you have it. ... For those who thought they were finished with Boolean logic in secondary school, it’s back — and attribute-based access control (ABAC) is a prime example of the practicality of utilizing the logic in decision trees to determine access permission. The adoption of ABAC allows access to protected information to be “hyper-granular.” An individual’s access may be initially defined by one’s role and certainly fall within the established policies.
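
The Boolean character of ABAC is easiest to see in code. Here is a minimal evaluator, with the attribute names and rules invented purely for illustration:

```python
def abac_allows(subject: dict, resource: dict, context: dict) -> bool:
    """Hyper-granular access as a Boolean expression over attributes.
    All attribute names and rules here are illustrative."""
    return (
        subject["department"] == resource["owning_department"]
        and subject["clearance"] >= resource["sensitivity"]
        and context["network"] == "corporate"
        and (resource["sensitivity"] < 3 or context["mfa_verified"])
    )

print(abac_allows(
    subject={"department": "finance", "clearance": 3},
    resource={"owning_department": "finance", "sensitivity": 3},
    context={"network": "corporate", "mfa_verified": True},
))  # True: every conjunct in the decision tree holds
```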


Goodbyes are difficult, IT offboarding processes make them harder

To ensure that the business continues even though the employee is gone, stale accounts are created with grace periods during which the employee’s credentials can still be used to access the organization’s networks. This is great for retaining the knowledge this employee accumulated and ensuring that their replacement is well-briefed, but since the employee is gone, nobody will remember to monitor their account, as malicious actors will soon notice. This employee may also have been forwarding emails to their personal email account or accessing their work email from personal devices for business purposes, making it easier for hackers to obtain sensitive company data and impossible for the organization to know. Existing offboarding processes may frustrate business executives due to their rigidity – and they aren’t alone in their annoyance. What’s bad for security is also, inevitably, bad for business. Security teams today must manually ensure that all access privileges, including access to various systems, applications, databases and physical facilities, be promptly terminated.
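
Part of this gap is mechanically detectable. Below is a sketch that joins HR departure data against a directory export to flag accounts past their grace period; the record layout and the 30-day window are illustrative assumptions.

```python
from datetime import date, timedelta

GRACE = timedelta(days=30)  # illustrative grace period

accounts = [  # illustrative join of directory and HR data
    {"user": "jsmith", "departed": date(2023, 4, 1),  "enabled": True},
    {"user": "apatel", "departed": None,              "enabled": True},
    {"user": "mlopez", "departed": date(2023, 6, 10), "enabled": True},
]

def overdue_accounts(records, today):
    """Accounts nobody is watching: the owner left, the grace period for
    knowledge transfer has lapsed, and the login still works."""
    return [
        r["user"] for r in records
        if r["enabled"] and r["departed"] and today - r["departed"] > GRACE
    ]

print(overdue_accounts(accounts, today=date(2023, 6, 21)))
# ['jsmith']: departed in April, still enabled past the grace period
```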

Leaders are made, not born: Although this is technically correct, which is why we rarely see 5-year-olds running companies or countries (though, in fairness, the adults that do often fail to provide convincing signs of superior emotional or intellectual maturity), people’s potential for leadership can be detected at a very young age. Furthermore, the dispositional enablers that increase people’s talent for leadership have a clear biological and genetic basis. ... The best leaders are confident: Not true. Although confidence does predict whether someone is picked for a leadership role, once you account for competence, expertise, intelligence, and relevant personality traits, such as curiosity, empathy, and drive, confidence is mostly irrelevant. And yet, our failure to focus on competence rather than confidence, and our lazy tendency to select leaders on style rather than substance (such as during presidential debates, job interviews, and short-term in-person interactions), contributes to most of the leadership problems described in point 1. Note that when leaders have too much confidence they will underestimate their flaws and limitations, putting themselves and others at risk.


How Organizations Can Create Successful Process Automation Strategies

Organizations can promote more collaboration by adopting a modified “Center of Excellence” (CoE) approach. In some companies, that might mean assembling a community devoted to process automation tasks and strategies, in which practitioners can share best practices and ask questions of one another. The CoE should help members from business and IT teams work together better by coordinating tasks, avoiding reinventing projects from scratch, and generally empowering them to drive continuous improvement together. Some organizations may want to create a central focus on process automation without using the actual CoE term. The terminology itself carries some legacy baggage from centralized Business Process Management (BPM) software. Some relied on a centralized approach for their CoE, counting on one team to implement process automation for the entire organization. That approach often led to bottlenecks for both developers and a line of business leaders, giving the CoE a bad reputation with few demonstrable results.


8 habits of highly secure remote workers

By working in a public place you are exposing yourself to serious cybersecurity risks. The first, and most direct one is over-the-shoulder attacks, also known as shoulder surfing. All this takes is for an observant, determined hacker to be sitting in the same space as you, paying close attention to your every move. ... "As you use public Wi-Fi, you are exposing your laptop or your device to the same network somebody else can log on to so that means they can actually peruse through your network, depending on the security of the local network on your laptop," says Gartner VP Analyst, Patrick Hevesi. Doing work in a public space while also not using public Wi-Fi may seem like a paradox, but there are simple and secure solutions. The first is using a VPN when accessing corporate information in public. ... "Your security is as good as your password, because that's the first line of defense," says Shah. "You want to make sure that you have a good strong password, and also don't use the same password for all the other sites you may be accessing."


Multicloud deployments don't have to be so complicated

The solution to these problems is not scrapping a complex cloud deployment. Indeed, considering the advantages that multicloud can bring (cost savings and the ability to leverage best-of-breed solutions), it’s often the right choice. What gets enterprises in trouble is the lack of an actual plan that states where and how they will store, secure, access, manage, and use all business data no matter where it resides. It’s not enough to push inventory data to a single cloud platform and expect efficiencies. We’re only considering data complexity here; other issues also exist, including access to application functions or services and securing all systems across all platforms. Data is typically where enterprises see the problems first, but the other matters will have to be addressed as well. A solid plan tells a complete data access story and includes data virtualization services that can make complex data deployments more usable by business users and applications. It also enables data security and compliance using a software layer that can reduce complexity with abstraction and automation. Simple data storage is only a tiny part of the solution you need to consider.
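
The abstraction layer idea can be shown in miniature. In the toy below, callers query datasets by name and never learn which cloud holds them; the class names and routing logic are invented for illustration.

```python
class DataVirtualizationLayer:
    """Toy abstraction layer: callers ask for data by name and never
    learn which cloud (or on-prem system) actually holds it."""
    def __init__(self):
        self.sources = {}  # dataset name -> backend adapter

    def register(self, dataset: str, backend):
        self.sources[dataset] = backend

    def query(self, dataset: str, **filters):
        backend = self.sources[dataset]  # routing is the layer's job
        return backend.fetch(dataset, **filters)

class CloudABackend:
    def fetch(self, dataset, **filters):
        return f"rows from cloud A / {dataset} where {filters}"

class CloudBBackend:
    def fetch(self, dataset, **filters):
        return f"rows from cloud B / {dataset} where {filters}"

dv = DataVirtualizationLayer()
dv.register("inventory", CloudABackend())
dv.register("orders", CloudBBackend())
print(dv.query("orders", region="EU"))  # the caller never names a cloud
```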


E-Commerce Firms Are Top Targets for API, Web Apps Attacks

Attack vectors, such as server-side template injection, server-side request forgery and server-side code injection, have also become popular and may lead to data exfiltration and remote code execution. "This, in turn, may be playing a role in preventing online sales and damaging a company's reputation," the researchers said, citing an Arcserve survey in which 60% of consumers said they wouldn't buy from a website that had been breached in the previous 12 months. SSTI is a hacker favorite for zero-day attacks. Its use is well-documented in "some of the most significant vulnerabilities in recent years, including Log4j," the researchers said. Hackers mainly targeted commerce companies with Log4j, and 58% of all exploitation attempts happened in the space. The Hafnium criminal group popularized SSRFs, which they used to attack Microsoft's Exchange Servers and reportedly launched a supply chain cyberattack that affected 60,000 organizations, including commerce. Hafnium used the SSRF vulnerability to run commands to the web servers, according to the report.
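
SSTI is worth seeing concretely. Using the Jinja2 templating engine (a third-party package, installable via pip), the vulnerable pattern compiles user input as the template itself, while the safer pattern only ever passes it in as data:

```python
from jinja2 import Environment

env = Environment()
user_input = "{{ 7 * 7 }}"  # a harmless probe; real payloads reach further

# Vulnerable: user input is compiled as the template itself.
print(env.from_string("Hello " + user_input).render())
# -> "Hello 49": the server evaluated attacker-supplied template code.

# Safer: user input is only ever data passed INTO a fixed template.
print(env.from_string("Hello {{ name }}").render(name=user_input))
# -> "Hello {{ 7 * 7 }}": rendered inert, not evaluated.
```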


It’s going to take AI to power AI

AI in the datacentre has the ability to act as a pair of eyes, keeping a keen watch on every aspect of the facility to detect and prevent threats. Analysing data from sources such as online access logs and network traffic would allow AI systems to watch for and alert organisations to cyber breaches in seconds. Further, we’re heading in the direction where AI-powered sensors could apply human temperature checks and facial recognition to monitor for physical intrusions. Ultimately, AI will have the opportunity to tune datacentres to operate like well-oiled machines, making sure all components work in harmony to deliver the highest level of performance in our AI-hungry world – a world pressurised by a cost-of-energy crisis and expanding cyber security threats. While the reality is more nuanced, put plainly, it is going to take AI to power AI. In fact, Gartner estimates that half of all cloud datacentres will use AI by 2025. It’s going to be a productive couple of years for industry developing one of the fastest-growing technologies, rolling it out, and doing so in a way that ensures trust.


Beyond ChatGPT: What is the Business Value of Generative Artificial Intelligence?

Beyond the attraction to the technology itself, generative AI has huge potential business value. Regardless of the processes, professions, or sectors of activity involved, the common thread among artificial intelligence projects is their shared objective of enabling, expediting, or enhancing human actions, either by facilitating or accelerating them. The use of AI usually starts with a question, or a problem. This is immediately followed by the analysis of a significant amount of exogenous information or endogenous information, with the aim of obtaining an answer to the question or problem through the creation of information useful to humans: aiding decision-making, detecting an anomaly, analyzing a hand-drawn schema, prioritizing problems to be solved, etc. More broadly, the automated generation of information makes it easier and safer to streamline some processes, such as moving from an idea to a first version by allowing for quicker validation or failure recognition, A/B testing, and simplified re-experimentation. 


Even in cloud repatriation there is no escaping hyperscalers

Hansson’s blog sparked pushback from cloud advocates like TelcoDR CEO Danielle Royston. She contended in an interview with Silverlinings that those using the cloud aren’t just paying for servers, but also for the proprietary tools the different cloud giants provide, the salaries they pay their top-tier developer talent, the hardware upgrades they make available to cloud users and the built-in security they offer. For those who use the cloud to its full potential, she said, the cloud is “the gift that keeps on giving.” Not only that, but those looking to repatriate workloads will need to invest significant time and money to transition back and hire more staff to develop new applications and manage the on-prem servers, she added. ... So, who’s right? Well, it seems the answer will vary by company and even by application. Pichai explained the cloud is the ideal environment for a small handful of workloads, namely “vanilla applications” which incorporate only standard rather than specialized features and “spikey applications” which need to scale on demand to accommodate irregular patterns of usage.



Quote for the day:

"To be an enduring, great company, you have to build a mechanism for preventing or solving problems that will long outlast any one individual leader" -- Howard Schultz

Daily Tech Digest - June 18, 2023

4 Advances In Penetration Testing Practices In 2023

Penetration testing has evolved significantly over the past few years, with a growing emphasis on mimicking real-life cyberattack scenarios for greater accuracy and relevance. By adopting more realistic simulation strategies, pen testers aim to emulate threats that an organization might realistically face in their operational environment, thereby providing valuable insights into susceptibilities and vulnerabilities. This approach entails examining an organization’s infrastructure from multiple angles, encompassing technological weaknesses as well as human factors such as employee behavior and resistance to social engineering attacks. ... With cyber threats constantly scaling and tech landscapes evolving at a rapid pace, automation enables organizations to efficiently identify potential weaknesses without sacrificing accuracy or thoroughness. Automated tools can expedite vulnerability assessment processes by scanning networks for known flaws or misconfigurations while continuously staying up-to-date with emerging threat information, significantly reducing manual workloads for security teams. 
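
At its simplest, that automation starts with reconnaissance. Below is a bare-bones TCP connect scan, shown only to illustrate the mechanics; real scanners such as nmap go far beyond this, and any scan requires authorization.

```python
import socket

def scan(host: str, ports) -> list[int]:
    """Bare-bones TCP connect scan: the first step most automated
    assessment tools perform. Only run this against systems you are
    authorized to test."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means it connected
                open_ports.append(port)
    return open_ports

print(scan("127.0.0.1", [22, 80, 443, 8080]))
```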


Microservices vs. headless architecture: A brief breakdown

In general, the microservice-based approach requires that architects and developers determine exactly which microservices to build, which is not an easy task. Software teams must carefully assess how to achieve the best balance between application complexity and modularity when designing a microservices application. There are also few standards or guidelines that dictate the exact number of individual microservice modules an application should embody. While including too many microservices can add unnecessary development and operations overhead as well as compromise the architecture's flexibility, a headless architecture is much easier to design since there is a clear division between the frontend and the backend. Division of responsibilities will remain much clearer, and the relationship between components is less likely to get lost in translation. A single microservice-based application can easily represent dozens of individual services running across a complex cluster of servers. Each service must be deployed and monitored separately because each one could impact the performance of other microservices.


The Power Of The Unconscious Mind: Overcoming Mental Obstacles To Success

Bringing our unconscious mind into alignment and reconciliation with our conscious mind requires a level of self-awareness that many people are unable to achieve independently. Individuals who are struggling with achieving goals and don’t know why may find it helpful to work with an objective outside observer, such as a therapist or a professional coach, who can help them identify thought and behavior patterns that may be holding them back from advancing in work or life. Ultimately, to break out of these self-limiting beliefs, it’s important to change one’s thinking, particularly in areas when self-abnegating thoughts have been dominating our lives for far too long. When I’m working with clients, I try to help them develop what’s called a “growth mindset”—that is, an inherent belief in one’s own ability to constantly learn new skills, gain new capabilities and improve. People who have a growth mindset do not see failures as the end of the road, or as confirmation of the self-limiting, critical beliefs they’ve internalized throughout their lives.


How AI and advanced computing can pull us back from the brink of accelerated climate change

AI is one of the significant tools left in the fight against climate change. AI has turned its hand to risk prediction, the prevention of damaging weather events such as wildfires, and carbon offsets. It has been described as vital to ensuring that companies meet their ESG targets. Yet, it’s also an accelerant. AI requires vast computing power, which churns through energy when designing algorithms and training models. And just as software ate the world, AI is set to follow. AI will contribute as much as $15.7 trillion to the global economy by 2030, which is greater than the GDP of Japan, Germany, India and the UK. That’s a lot of people using AI as ubiquitously as the internet, from using ChatGPT to craft emails and write code to using text-to-image platforms to make art. The power that AI uses has been increasing for years now. For example, the power required to train the largest AI models doubled roughly every 3.4 months, increasing 300,000 times between 2012 and 2018. This expansion brings opportunities to solve major real-world problems in everything from security and medicine to hunger and farming.


Unleashing the Power of Data Insights: Denodo Platform & the New Tableau GPT capability

When the Denodo Platform and Tableau GPT are integrated, Tableau customers can unlock several key benefits, including: Data Unification: The Denodo Platform’s logical data management capabilities provide Tableau GPT with a unified view of data from diverse sources. By integrating data silos and disparate systems, organizations can access a comprehensive, holistic data landscape within Tableau. The elimination of manual data consolidation simplifies the process of accessing and analyzing data, accelerating insights and decision-making. This significantly reduces the need for manual effort and enhances efficiency in data management. Expanded Data Access: The Denodo Platform’s ability to connect to a wide range of data sources means Tableau GPT can leverage an extensive array of structured and unstructured data. With connections to over 200 data sources, the Denodo Platform lets organizations tap into a comprehensive, distributed data ecosystem as easily and simply as connecting to a single data source.


Importance of quantum computing for reducing carbon emissions

Quantum computers have been an exciting tech development in recent times. They are exponentially faster than classical computers, which makes them suitable for several applications in a wide variety of areas. However, they are still in their nascent stage of development, and even the most sophisticated machines are limited to a few hundred qubits. There is also the inherent problem of random fluctuations or noise—the loss of information held by qubits. This is one of the chief obstacles in the practical implementation of quantum computers. As a result, it takes more time for these noisy intermediate-scale quantum (NISQ) computers to perform complex calculations. Even the most basic reaction of CO2 with the simplest amine, ammonia, turns out to be too complex for these NISQs. One possible remedy is to combine quantum and classical computers, to overcome the problem of noise in quantum algorithms. The variational quantum eigensolver (VQE), for example, utilises a quantum computer to estimate the energy of a quantum system, while using a classical computer to optimise and suggest improvements to the calculation.
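
The hybrid loop is small enough to sketch. In the toy below, the quantum step is simulated classically for a single qubit with Hamiltonian H = Z and ansatz Ry(theta)|0>, shot noise included, while a classical optimizer drives theta via the parameter-shift rule. A real VQE would run the estimation on quantum hardware through a framework such as Qiskit.

```python
import math
import random

def energy_estimate(theta: float, shots: int = 2000) -> float:
    """Stand-in for the quantum step: sample <Z> for the state
    Ry(theta)|0>, shot noise included. In a real VQE this estimate
    comes from a quantum processor."""
    p0 = math.cos(theta / 2) ** 2        # probability of measuring |0>
    zeros = sum(random.random() < p0 for _ in range(shots))
    return (2 * zeros - shots) / shots   # <Z> = P(0) - P(1)

# Classical outer loop: gradient descent using the parameter-shift rule,
# which is exact for this single-rotation ansatz.
theta, lr = 0.3, 0.4
for _ in range(50):
    grad = (energy_estimate(theta + math.pi / 2)
            - energy_estimate(theta - math.pi / 2)) / 2
    theta -= lr * grad

print(f"theta = {theta:.2f}  energy = {energy_estimate(theta):+.3f}")
# Converges near theta = pi, where <Z> = -1: the ground state of H = Z.
```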


Master the small daily improvements that set great leaders apart

When people talk about authentic leadership, what they’re really looking for is someone who practices what they preach. You don’t have to be successful at everything you’ll ask others to try, but you’ll need to have tried it. You’ll also need to understand how and when certain skills work, and when they don’t. Consider making time to take care of yourself. We tell folks that it’s important to take vacation time to recharge their batteries, but do we do the same? I had a colleague who would take a big splashy vacation every year. He’d make sure to tell everyone that there was no cellphone reception where he was going for that week. The other 51 weeks of the year? He’d respond instantly to all communications and always follow up with questions, sending messages day or night, seven days a week. The clear subtext was that outside of disappearing for one week a year, there was no expectation of taking time away. His message about time away from the office rang hollow to everyone around him. Great leaders make a point of disappearing often to take care of themselves in visible ways. 


Unleashing the Power of AI-Engineered DevSecOps

Implementing an AI-engineered DevSecOps solution comes with several potential pitfalls that can derail the process if not appropriately managed. Here are a few of them, along with suggestions for how to avoid them: Inadequate Planning and Alignment with Business Goals: Ignoring the strategic alignment between implementing AI-engineered DevSecOps and overall business goals can lead to undesirable outcomes. Clearly define the business objectives and how AI-engineered DevSecOps supports them. Outline expected outcomes and key performance indicators (KPIs) that align with business goals to guide the initiative. Neglecting Training and Upskilling: AI tools can be complex, and without proper understanding and training, their deployment may not yield desired results. Invest in training your teams on AI-engineered DevSecOps tools and techniques. Ensure they understand the functionalities of these tools and how to effectively use them. Upskilling your team will be crucial for leveraging AI capabilities. Ignoring Change Management: Introducing AI into DevSecOps is a significant change that can disrupt workflows and meet resistance from team members.


Scientists conduct first test of a wireless cosmic ray navigation system

It's similar to X-ray imaging or ground-penetrating radar, except with naturally occurring high-energy muons rather than X-rays or radio waves. That higher energy makes it possible to image thick, dense material. The denser the imaged object, the more muons are blocked. The Muographix system relies on four muon-detecting reference stations above ground serving as coordinates for the muon-detecting receivers, which are deployed either underground or underwater. The team conducted the first trial of a muon-based underwater sensor array in 2021, using it to detect the rapidly changing tidal conditions in Tokyo Bay. They placed ten muon detectors within the service tunnel of the Tokyo Bay Aqua-Line roadway, which lies some 45 meters below sea level. They were able to image the sea above the tunnel with a spatial resolution of 10 meters and a time resolution of one minute, sufficient to demonstrate the system's ability to sense strong storm waves or tsunamis. The array was put to the test in September of that same year, when Japan was hit by a typhoon approaching from the south, producing mild ocean swells and tsunamis.


Five Steps to Principle-based Technology Transformation

Enterprise architecture frameworks prescribe using a set of principles to guide and align all architectural decisions within a particular environment. But how does one get to that set of principles, and how does it help to achieve some desired end state? Still, I believe in principles – chosen at the right time, using the proper context. They are much like having values in life – they allow you to test and focus decisions in complex environments; and also, provide a mechanism to explain technology decisions to business people. As principles guide decisions for future actions, they must ensure achievement of the transformation goals. But how does one determine the starting point in a complex environment, and how does one define the endpoint in the ever-changing landscape? I found these questions very perplexing until I realised that the success of a technology architecture is not about using any specific system or solution, but more about the CHARACTERISTICS of the environment required by the business to grow, prosper and achieve its strategic objectives.



Quote for the day:

"Success is not a random act. It arises out of a predictable and powerful set of circumstances and opportunities." -- Malcolm Gladwell

Daily Tech Digest - June 17, 2023

Borderless Data vs. Data Sovereignty: Can They Co-Exist?

Businesses have long understood that data sharing has limits (or borders). Legal separations keep data from various subsidiaries distinct or limit sharing between partners to specific data types. Multi-tenant software applications often require logical partitions to keep customer data private. What is rapidly changing are new data sovereignty laws, often cloaked as "data privacy" regulations, that enforce geographic boundaries on where data is processed and stored. Businesses must comply with the laws of each country where they operate, and data sovereignty presents a clear compliance challenge as companies hurry to rethink how and where they safely acquire personal data to share and protect. Countries enacting regulations keeping personal data inside their borders may deem their citizens' data of strategic national importance. More commonly, it's an enforcement mechanism that acknowledges personal data as an asset owned by individuals that businesses must use and share according to that country's laws. Recent data sovereignty requirements cannot be easily bypassed or pushed to the consumer's consent.


All change: The new era of perpetual organizational upheaval

With upsets coming from all directions—whether they be supply chain disruptions, surging inflation, or spikes in interest rates and energy prices—companies need to focus on being prepared and ready to act at all times. The key is not just to bounce out of crises, but to bounce forward—landing on their feet relatively unscathed and racing ahead with new energy. ... But it’s raising huge questions: How can companies provide structure and support to all employees regardless of where they are? How do they address the potential risks to company culture and the sense of belonging, as well as to collaboration and innovation? The pandemic exacerbated other trends, including the continuing skills mismatch in the labor market, which the onward march of technology is intensifying. It threw a harsh light on the challenge of workplace motivation—sometimes referred to as the “great attrition,” with workers leaving their jobs, or quiet quitting, essentially downscaling their efforts on the job.


A guide to becoming a Chief Information Security Officer: Steps and strategies

The technical skills are a must-have. Know all about network security, cloud security, identity access management, adopting and adapting infrastructure, along with tools and technologies that allow for the preservation of organizational data privacy, integrity and computing availability. Security engineers who are interested in becoming CISOs often focus on problem hunting. CISOs need to not only be able to find problems, but to identify problems and vulnerabilities that aren’t apparent to those around them. Learning to ask the right kinds of questions and thinking about issues in unconventional ways take time and practice. CISOs need to continuously update their mental models when it comes to thinking about cyber security. The mental model required for on-premise cyber security implementation is different from that required for the cloud. As an increasing number of automation and AI-based tools emerge, mental models will again need to be retrofitted. Many aspiring CISOs sell their technical credentials to prospective employers. This is important. 


TinyML computer vision is turning into reality with microNPUs (µNPUs)

Digital image processing—as it used to be called—is used for applications ranging from semiconductor manufacturing and inspection to advanced driver assistance systems (ADAS) features such as lane-departure warning and blind-spot detection, to image beautification and manipulation on mobile devices. And looking ahead, CV technology at the edge is enabling the next level of human machine interfaces (HMIs). HMIs have evolved significantly in the last decade. On top of traditional interfaces like the keyboard and mouse, we have now touch displays, fingerprint readers, facial recognition systems, and voice command capabilities. While clearly improving the user experience, these methods have one other attribute in common—they all react to user actions. The next level of HMI will be devices that understand users and their environment via contextual awareness. Context-aware devices sense not only their users, but also the environment in which they are operating, all in order to make better decisions toward more useful automated interactions. 


Intel Announces Release of ‘Tunnel Falls,’ 12-Qubit Silicon Chip

“Tunnel Falls is Intel’s most advanced silicon spin qubit chip to date and draws upon the company’s decades of transistor design and manufacturing expertise. The release of the new chip is the next step in Intel’s long-term strategy to build a full-stack commercial quantum computing system. While there are still fundamental questions and challenges that must be solved along the path to a fault-tolerant quantum computer, the academic community can now explore this technology and accelerate research development.” — Jim Clarke, director of Quantum Hardware, Intel Why It Matters: Currently, academic institutions don’t have high-volume manufacturing fabrication equipment like Intel. With Tunnel Falls, researchers can immediately begin working on experiments and research instead of trying to fabricate their own devices. As a result, a wider range of experiments become possible, including learning more about the fundamentals of qubits and quantum dots and developing new techniques for working with devices with multiple qubits.


What bank leaders should know about AI in financial services

While this technology has many exciting potential use cases, so much is still unknown. Many of Finastra’s customers, whose job it is to be risk-conscious, have questions about the risks AI presents. And indeed, many in the financial services industry are already moving to restrict use of ChatGPT among employees. Based on our experience as a provider to banks, Finastra is focused on a number of key risks bank leaders should know about. Data integrity is table stakes in financial services. Customers trust their banks to keep their personal data safe. However, at this stage, it’s not clear what ChatGPT does with the data it receives. This raises the even more concerning question: Could ChatGPT generate a response that shares sensitive customer data? With the old-style chatbots, questions and answers are predefined, governing what’s being returned. But what is asked and returned with new LLMs may prove difficult to control. This is a top consideration bank leaders must weigh and keep a close pulse on. Ensuring fairness and lack of bias is another critical consideration.


Are public or proprietary generative AI solutions right for your business?

Internal large language models are interesting. Training on the whole internet has benefits and risks — not everyone can afford to do that or even wants to do it. I’ve been struck by how far you can get on a big pre-trained model with fine tuning or prompt engineering. For smaller players, there will be a lot of uses of the stuff [AI] that’s out there and reusable. I think larger players who can afford to make their own [AI] will be tempted to. If you look at, for example, AWS and Google Cloud Platform, some of this stuff feels like core infrastructure — I don’t mean what they do with AI, just what they do with hosting and server farms. It’s easy to think ‘we’re a huge company, we should make our own server farm.’ Well, our core business is agriculture or manufacturing. Maybe we should let the A-teams at Amazon and Google make it, and we pay them a few cents per terabyte of storage or compute. My guess is only the biggest tech companies over time will actually find it beneficial to maintain their own versions of these [AI]; most people will end up using a third-party service. 


Governance in the Age of Technological Innovation

To keep abreast of technological change and innovation, the board needs to ensure that its innovation and risk agendas are up-to-date, and that innovation is incorporated into the organisation’s strategy review. This may involve reviewing key performance indicators, performance measures and incentives. Within the board, the appropriate composition, culture and interactions can promote innovation. Not all board directors will have the relevant technical expertise, but more diverse boards can build collective literacy and enhance human capital in the boardroom, said De Meyer. Where necessary, committees such as scientific or innovation committees can be created to drive greater attention to these topics. In these cases, naming matters, said Janet Ang, non-executive Chair of the Institute of Systems Science in the panel discussion. For instance, referring to a committee as “Technology and Risk” instead of narrowly naming it as “IT” gives it more weight and scope. Fundamentally, boards should not only strive for conformance but also performance, urged Su-Yen Wong, Chair of the Singapore Institute of Directors. 


Can You Renegotiate Your Cloud Bill by Refusing to Pay It?

Hyperscalers in cloud continue to face questions about the cost and reliability of their services, especially in light of the brief AWS outage on June 13 that affected Southwest Airlines, McDonald’s, and The Boston Globe along with others. Further, some organizations face regulatory requirements that preclude the use of the cloud for certain datasets and transactions, Katz says. “There’s really no one-size-fits-all answer because every manufacturer, every organization, every company has different requirements.” There can be times when a cloud-first approach does not make sense for organizations. Katz says his company worked with a client whose dataset is very transactional with lots of changes and database read-writes. “We ran an assessment for them and going off to the public cloud was going to be eight times more expensive a month than keeping it on prem.” ... Much of the market is pushing toward a cloud-first world, but the economics could become challenging in the future. “At some point in time, the cost of doing business in the cloud is going to be exponentially higher, usually, than if you were to buy a depreciating asset and then kick it to the curb,” Katz says.


Red teaming can be the ground truth for CISOs and execs

What red teams can give CISOs is the cold, hard truth of how their network stacks up against threats that could be ruinous to the business. Red teams leave no stone unturned and pull on every thread until it unravels, shining a light on the vulnerabilities that could harm the finances or reputation of the business. With a red team, objective-based continuous penetration testing (led by experts who know attackers’ best tricks) can relentlessly scrutinize the attack surface and explore every avenue that could lead to a breakthrough. This proactive, “offensive security” approach gives a business the most comprehensive picture of its attack surface that money can buy, mapping out every possibility available to an attacker and how it can be remediated. Nor is it limited to testing the technology stack: for businesses concerned that their employees are susceptible to social engineering attacks, red teams can emulate social engineering scenarios as part of their testing. A stringent social engineering assessment program should not be overlooked in favor of only scrutinizing weaknesses in IT infrastructure.



Quote for the day:

"Leadership is just another word for training." -- Lance Secretan

Daily Tech Digest - June 15, 2023

The five new foundational qualities of effective leadership

Today’s leaders have to be able to establish a compelling destination and then navigate through the fog with a compass. “You have to be ready to make a decision today, realizing that you may get new data tomorrow that means you have to reverse the decision you just made,” a veteran CEO of a Fortune 25 company told us. “You have to have the courage to follow that new information. The job’s always been ambiguous. But the environment has never been this fluid.” Boards and CEOs expect succession candidates to be adept at providing direction and key performance indicators that will signal whether course adjustments are necessary. “We’re living in an age with many more discontinuities than we had a generation or two ago,” said Mark Thompson, former CEO of the New York Times Company and now board chairman of Ancestry. “It’s not about trying to find the perfect strategies. It’s more about helping organizations to be more open, flexible, and adaptable to change.” This shift demands a more dynamic, individual leadership approach, as well as a reimagining of basic organizational processes. 


5 best practices to ensure the security of third-party APIs

Maintaining an API inventory that automatically updates as code changes is an instrumental first step for an API security program, says Jacob Garrison, a security researcher at Bionic. The inventory should distinguish between first-party and third-party APIs, and it encourages continuous monitoring for shadow IT — APIs brought on board without notifying the security team. “To ensure your inventory is robust and actionable, you should track which APIs transmit business-critical information, such as personally identifiable information and payment card data,” he says. An API inventory is complementary to third-party risk management, according to Garrison. When developers utilize third-party APIs, it’s worthwhile to consider risk assessments of the vendors themselves. ... Frank Catucci, chief technology officer and head of security research for Invicti Security, agrees that including an inventory of third-party APIs is critical. “You need to have third-party APIs be part of your overall API inventory and you have to look at them as assets that you own, that you are responsible for,” he says.
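To make this concrete, here is a minimal sketch of what one inventory record and a shadow-API check might look like; the schema, field names, and the host-matching heuristic are illustrative assumptions, not taken from any particular tool.

```python
from dataclasses import dataclass, field

@dataclass
class ApiRecord:
    """One entry in an API inventory, distinguishing first- vs third-party
    APIs and flagging the sensitive data classes each one transmits."""
    name: str
    base_url: str
    party: str                  # "first-party" or "third-party"
    owner: str                  # owning team, or the vendor for third-party
    data_classes: list = field(default_factory=list)  # e.g. ["PII", "PCI"]

inventory = [
    ApiRecord("billing-core", "https://api.example.com/billing",
              "first-party", "payments-team", ["PII", "PCI"]),
    ApiRecord("geo-lookup", "https://geo.vendor.example/v2",
              "third-party", "GeoVendor Inc."),
]

def find_shadow_apis(observed_hosts: set[str]) -> set[str]:
    """Hosts seen in live traffic but absent from the inventory --
    candidates for the shadow IT Garrison warns about."""
    known_hosts = {rec.base_url.split("/")[2] for rec in inventory}
    return observed_hosts - known_hosts

print(find_shadow_apis({"api.example.com", "unknown-saas.example.net"}))
```

Tagging each record with its data classes is what makes the inventory “actionable”: a query for every third-party API carrying PCI data becomes a one-liner.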


Generative AI’s change management challenge

“The hardest part of AI acceptance is creating a space where employees can still add value and not feel they are competing with AI to create value,” Bellefonds added. “A lot of the work we do when it comes to change management and coaching is to help employees work with AI and at the same time, change the way they add value, so that a part of their job is taken by AI but their part refocuses on higher value-adding tasks.” Exactly how those processes are rewired and working methods changed will vary from one enterprise to another, he said. Employees’ concerns about AI are unevenly distributed in other ways, too: leaders are more likely to be optimistic, and frontline workers concerned, BCG found. And while 68% of leaders believe their companies have implemented adequate measures to ensure responsible use of AI, only 29% of their frontline employees feel that way. Despite BCG’s findings of optimism in the workforce, there’s a darker side: over one-third of respondents think their job is likely to be eliminated by AI, and almost four-fifths want governments to step in and deliver AI-specific regulations to ensure it is used responsibly.


As Machines Take Over — What Will It Mean to Be Human?

Biocomputing is a field of study that uses biologically based molecules, such as DNA or proteins, to perform computational tasks. Imitating the genius of nature can completely shift the paradigm of understanding when it comes to the computation and storage of data, and the field has shown promise in cryptography and drug discovery. However, biocomputers are still limited compared with conventional computers: they aren’t good at cooling themselves or at doing more than two things simultaneously. Advancements in AI, meanwhile, have been booming. Since 2012, interest in AI, especially in machine learning, has been renewed, leading to a dramatic increase in funding and investment. Machine learning models ingest large amounts of data and infer patterns from it. More recently, generative AI has become extremely popular with the release of large AI models such as MidJourney, ChatGPT and Stable Diffusion. Generative AI is a class of AI algorithms that generates new data or content closely resembling existing, human-made data.


What is SDN and where is it going?

There are three main components to a software-defined network: the controller, applications, and devices. The controller takes over the role of the control plane on each individual network device, populating the tables that the data planes on those devices use to do their work. Various communication protocols can be used for this purpose, including OpenFlow, though some vendors use proprietary protocols. Communication between the controller and the devices takes place over what are called southbound APIs. The software controller is, in turn, managed by applications, which can fulfill any number of network administration roles, including load balancers, software-defined security services, orchestration applications, or analytics applications that keep tabs on what’s going on in the network. These applications talk to the controller through well-documented northbound REST APIs, which allow applications from different vendors to work with the controller with ease.
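As an illustration, a northbound interaction might look something like the sketch below. The controller host, resource paths, credentials, and JSON schema are hypothetical stand-ins, since each controller (OpenDaylight, ONOS, vendor products) publishes its own northbound REST API.

```python
import requests

# Hypothetical SDN controller -- host, paths, credentials, and JSON schema
# are illustrative placeholders, not a specific controller's API.
CONTROLLER = "https://sdn-controller.example.com:8443"
AUTH = ("admin", "admin")

# Northbound call: an application asks the controller what devices it manages.
devices = requests.get(f"{CONTROLLER}/api/v1/devices", auth=AUTH, timeout=10)
devices.raise_for_status()
print("Managed devices:", devices.json())

# Northbound call: push a forwarding intent. The controller translates this
# into data-plane table entries on the device via its southbound protocol
# (e.g., OpenFlow).
flow = {
    "device": "switch-03",
    "match": {"dst_ip": "10.0.7.20/32"},
    "action": {"output_port": 4},
    "priority": 100,
}
resp = requests.post(f"{CONTROLLER}/api/v1/flows", json=flow,
                     auth=AUTH, timeout=10)
resp.raise_for_status()
print("Flow installed:", resp.json())
```

The key architectural point the excerpt makes is visible here: the application never talks to the switch; it only states intent to the controller, which owns the control plane.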


Using Trauma-Informed Approaches in Agile Environments

Software is, by its nature, very abstract. For this reason, we naturally tend to be in our heads and thoughts most of the time while at work. A more trauma-informed approach, however, requires us to pay attention to our physical state and not just to our brain and cognition. Our body and its sensations give us many signals that are vital not just to our well-being but also to our productivity and to our ability to understand each other and adapt to change. Paradoxically, paying more attention to our physical and emotional state ends up freeing more cognitive resources for our work. Noticing our bodily sensations in the moment, like the breath or muscle tension in a particular area, can be a first step to getting out of a traumatic pattern, and a generally higher level of body awareness can help us fall into such patterns less often in the first place. Put simply, body awareness anchors us in the here and now, making it easier to recognize past patterns as inadequate for the current situation.


How Pyramid Thinking Can Revolutionize Your Data Strategy

Before devising a corporate data strategy, the main thing you need to know is the strategy and objectives of your organization as a whole. Data can be a truly transformative tool, but even the sharpest knife must be wielded accurately to get the best results -- which is why you need to know the end goal before you can understand how data will help you achieve it. This end goal forms the very peak of the pyramid, and it is by looking down from it that you can understand the role data can play. For organizations struggling to pinpoint that goal (as often happens when the business strategy isn’t well defined and documented), it is worth considering key business problems and the consequent opportunities for improvement. ... Identifying business goals gives you the basis upon which to build your data strategy, and with that you can begin to be more specific about the change you are looking to make. An actionable and measurable formula helps you shape those changes with clarity, such as “we want to do x by measuring/tracking/analyzing y in order to do z.”
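If it helps to make the x/y/z formula concrete, it can even be captured as a structured record so each initiative stays explicit and measurable. A minimal sketch, with field names and the example values being my own illustration rather than anything from the article:

```python
from dataclasses import dataclass

@dataclass
class DataObjective:
    """One instance of the 'do x by measuring y in order to do z' template."""
    change: str   # x -- the measurable change we want to make
    measure: str  # y -- what we measure/track/analyze
    goal: str     # z -- the business goal at the peak of the pyramid

obj = DataObjective(
    change="reduce customer churn by 10%",
    measure="support-ticket sentiment and repeat-contact rate",
    goal="grow recurring revenue",
)
print(f"We want to {obj.change} by analyzing {obj.measure} "
      f"in order to {obj.goal}.")
```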


Network spending priorities for second-half 2023

Security is the area where most users expect to spend more, but at the same time the area where they believe their spending is most likely to be sub-optimal. Three-quarters of buyers think they already spend too much on security because they’ve layered things on without considering the whole picture. You hear terms like “holistic approach” or “rethinking” a lot in their comments, but at the same time, fewer than an eighth of users expect to redo their security strategies in any way. ... The reason for the seemingly mindless AI enthusiasm is a simple reversal of an old saying: “Where there’s hope, there’s life.” AI could (theoretically) reduce operator errors. It could (hopefully) improve network capacity planning. It could (presumably) help secure applications and data and spot malefactors. All of these are recurring problems that seem to defy solution, and AI offers hope that a solution might be near at hand. What’s not to love, provisionally of course.


Biodiversity Means Business

Technology can play a key role in navigating biodiversity issues. Predictive analytics, machine learning, digital twins, blockchain and the Internet of Things can deliver insight, visibility and measurability into sourcing, supply chains and environmental impacts. However, Katic emphasizes that these tools must be used to drive real change. “They must support a paradigm shift to new, sustainable models of development, rather than entrenching business as usual. They must deliver enhanced transparency and accountability,” she says. Ultimately, companies must embed biodiversity deep into their business strategies and daily operations, Katic says. This includes the use of science-based methods built around the UN’s Sustainable Development Goals and its Global Biodiversity Framework. It can also incorporate tools such as S&P Global’s scoring system, part of its UN-linked Sustainable1 initiative, which provides dependency scores, ecosystem footprint insights, and other biodiversity data that can guide decision-making. In addition, the SBTN framework can serve as a valuable resource; more than 200 organizations helped shape its initial set of methods, tools, and guidance.


5 roadblocks to Rust adoption in embedded systems

Rust is not a trivial language to learn. While it shares common ideas and concepts with many of the languages that came before it, including C, the learning curve is steeper. When a company looks to adopt a new language, it either hires engineers who already know the technology or is forced to train its existing team. Teams interested in using Rust for embedded work will find themselves in a small, niche community in which few qualified embedded software engineers know the language. That means paying a premium for the rare developers who do, or investing in training the internal team. Training a team to use Rust isn’t a bad idea; every company and developer should be investing in themselves constantly, because our field changes so rapidly that you’ll quickly get left behind if you don’t. However, switching from one programming language to another must provide a return on investment for the company, especially when the destination is an immature language like Rust.



Quote for the day:

"Don't focus so much on who is following you, that you forget to lead." -- E'yen A. Gardner