Daily Tech Digest - September 06, 2022

Taking Security Strategy to the Next Level: The Cyber Kill Chain vs. MITRE ATT&CK

There are two models that can help security professionals harden network resources and protect against modern-day threats and attacks: the cyber kill chain (CKC) and the MITRE ATT&CK framework. The CKC, developed by Lockheed Martin more than a decade ago, provides a high-level view of the sequence of a cyberattack from initial reconnaissance through weaponization and action. While it is widely used by security teams, it has its limitations. For example, host attack behaviors are not included in the model, and attackers may bypass or combine multiple steps. The newer MITRE ATT&CK framework maps closely to the CKC but focuses more on cyber resilience to withstand emergent threats. This open-source project also provides substantial support for tracing host attack behaviors. ... Present-day attacks utilize encryption over the network, making it very difficult to detect attack behaviors via the network itself. To overcome this limitation, enterprises typically deploy host security products alongside their network security products. Host security products might include traditional antivirus programs, endpoint detection and response (EDR) solutions, or endpoint protection platforms (EPPs).


Why Cloud Databases Need to Be In Your Tech Stack

Companies need to operate at a constantly increasing scale — more data, more speed, more customer touchpoints. IDC estimates that there will be 41.6 billion connected IoT devices, or “things,” generating 79.4 zettabytes (ZB) of data in 2025. The only way to keep up with this moving train is to have a cloud database that can handle huge amounts of data and can do so with extreme agility and low latency. There are two types of scaling: horizontal (adding more nodes to a system) and vertical (adding more resources to a single node). Relational databases of old are not elastic: they cannot scale with the volume and velocity of data access. They are built more like airplanes. If you want to add 20 more seats to your flight, you have to get a new plane that is built with 20 more seats. In other words, you can’t extend this plane to accommodate 20 more passengers. This is vertical scaling. Cloud databases, by contrast, are built more like trains. If you want to add 20 more seats to your popular train route, all you have to do is add another coach. That is horizontal scaling.
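As a rough sketch of what "adding another coach" looks like in code, here is a toy Python key-value store that shards keys across nodes by hash and rebalances when a node is added. The node names and the naive modulo scheme are illustrative assumptions; real cloud databases use consistent hashing or range partitioning with online rebalancing.

```python
import hashlib

class ShardedStore:
    """Toy key-value store that spreads keys across nodes by hashing.
    Illustrative only: real cloud databases use consistent hashing or range
    partitioning plus online rebalancing, not this naive modulo scheme."""

    def __init__(self, nodes):
        self.nodes = list(nodes)                      # e.g. ["node-1", "node-2"]
        self.data = {n: {} for n in self.nodes}

    def _node_for(self, key):
        digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)]

    def put(self, key, value):
        self.data[self._node_for(key)][key] = value

    def add_node(self, node):
        # Horizontal scaling: attach another "coach" and rebalance the keys
        # instead of replacing the whole "plane" with a bigger one.
        old_shards = self.data
        self.nodes.append(node)
        self.data = {n: {} for n in self.nodes}
        for shard in old_shards.values():
            for key, value in shard.items():
                self.put(key, value)

store = ShardedStore(["node-1", "node-2"])
store.put("user:42", {"name": "Ada"})
store.add_node("node-3")   # capacity grows without swapping in a bigger box
```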


Report: Organ Transplant Data Security Needs Strengthening

The newest criticism comes from a federal watchdog review of the Health Resources and Services Administration and the nonprofit United Network for Organ Sharing. As of January, nearly 107,000 individuals were candidates on the Organ Procurement and Transplantation Network waitlist. OPTN is designated by the federal government as a "high-value asset." UNOS, which manages its network at the administration's behest, lacked system monitoring and only had draft procedures for access controls when federal auditors conducted their review. The OPTN "is a very 'just in time' system where the time between an organ becoming available and getting it into the right patient can be measured in days or even hours," says Benjamin Denkers, chief innovation officer at consultancy CynergisTek. "Hackers breaching the system could create any number of disruptions to the system connecting available organs with patients in need." A statement from a UNOS spokeswoman shared with Information Security Media Group notes that auditors concluded that "OPTN security controls 'protect the confidentiality, integrity, and availability of transplant data.'"


Spinning uncertainty into success

The Upside of Uncertainty delivers helpful takeaways and, perhaps most important, offers anyone struggling with a murky future the courage to persevere. The book also contains useful insights into shifting one’s perspective in tough times, describing entrepreneurial heuristics that can help shrewd thinkers tap into potential opportunity. For example: pressing on when uncertainty emerges, even at the risk of failure; reframing failure as an opportunity for learning and adaptation; exploiting resources and skills at hand instead of investing too deeply in research before experimenting; and thinking entrepreneurially by leveraging existing resources in new ways. The authors cite the example of Pokémon Go, which was created by a multiplayer-game designer and digital mapping expert who’d helped create what became Google Maps. He realized that Google Maps’ geopositioning technology could be paired with Pokémon characters to form an engaging augmented reality game. Similarly, the founders of Traveling Spoon, a startup that connects food-focused travelers with local home cooks, saw entrepreneurial potential hiding in plain sight when a local woman shared a delicious homemade meal with them in Mexico.


Design For Security Now Essential For Chips, Systems

“There’s a real danger in security, because of its complications and being really hard to understand, to run into the equivalent of what in sustainability is called green-washing,” said Frank Schirrmeister, senior group director, solutions and ecosystem at Cadence. “This is ‘secure-washing,’ and while there may be government regulations, it’s all about customers in the commercial world. Semiconductor companies and system vendors have to serve their end customers, and for them it’s like selling insurance. You really didn’t know that you needed security until you ran into a real issue. That’s when they say, ‘If I just would have had insurance.’ But how to implement it is really an intricate issue, and it’s hard to understand from a technology perspective. I fear it may be similar to a clean energy ‘Energy Star’ sticker on a washing machine, which may just mean, ‘Yes, I have documented processes.’ That’s why I think there’s a danger of secure-washing, where the end consumer is lulled into a sense that ‘this thing is secure,’ without really understanding what’s underneath, who confirmed it, and what the process was. That’s why standardization is crucial. But it also needs to be transparent.”


The risks of neglecting data governance

Data governance will make or break your organisation’s reputation. The impact of the brand degradation that businesses are likely to suffer once their lax approach to data protection is revealed could be significant. No one wants to transact with a business that will not protect their data. In fact, data protection is set to become the next ‘badge of honour’ for businesses. Whilst sustainability, diversity and fair trade have previously been accolades that customers look for when choosing which businesses to interact with, being a data guardian is a growing phenomenon. The reputational impact that a GDPR fine can have on a business is, therefore, huge and can result in significant customer loss. With the growth of competition in many markets, it is easy for customers to find an alternative. Financially, this loss will often amount to more than the fine itself. Such negligence can also have a negative impact on your supply chain. Just as customers do, partners, suppliers, and service providers will also choose not to work with organisations that fail to comply with standards such as GDPR.


Choosing the Right Cloud Infrastructure for Your SaaS Start-up

The first consideration is the company’s ability to manage the infrastructure, including the time required, whether humans are needed for the day-to-day management, and how resilient the product is to future changes. If the product is used primarily by enterprises and demands customization, then you may need to deploy the product multiple times, which could mean more effort and time from the infra admins. The deployment can be automated, but the automation process requires the product to be stable. The ROI might not be good for an early-stage product. My recommendation in such cases would be to use managed services such as PaaS for infrastructure, managed services for the database/persistence layer, and FaaS (serverless architecture) for compute. ... And the key to moving quickly from development to release is to spend more time in coding and testing than in provisioning and deployments. Low-code and no-code platforms are good to start with. Serverless and FaaS are designed to solve these problems. If your system involves many components, building your own boxes will consume too much time and effort. Similarly, setting up Kubernetes will not make it faster.


Edge infrastructure: 7 key facts CIOs should know about security

There is no blanket security solution that will mitigate every risk – that’s true at the edge, in the cloud, and in your datacenter or corporate offices. Your IT stack has multiple layers; even a single application has multiple layers. Your security posture should, too. Edge computing boosts the case for a multi-layered approach to security. This whitepaper describes a layered approach to container and Kubernetes security. While the details may differ in an edge environment, the core concept here remains relevant: A well-planned mix (or layers) of processes, policies, and tools – that lean heavily on automation wherever possible – is vital to securing inherently distributed systems. ... “You have to ensure that you enforce security controls at the granularity of the edge location, and that any edge location that is breached can be isolated away without impacting all the other edge locations,” says Priya Rajagopal, director of product management at Couchbase. This is similar in concept to limiting “east-west” traffic and other forms of isolation and segmentation in container and Kubernetes security. There’s no such thing as zero risk – things happen. 


How to Optimize Your Organization for Innovation

Building a culture that encourages creativity usually requires starting small and supporting frequent iteration. “Be willing to try ideas and approaches that may not work,” suggests Christine Livingston, managing director in the emerging technology practice at business consulting firm Protiviti. Employee-led technology advisory teams and initiative groups allow staffers to feel a sense of ownership while finding solutions to complex issues, observes Susan Tweed, vice president of enterprise technologies at analytics, artificial intelligence and data management software and services provider SAS. “People can participate in ways that maximize their strengths,” she says. “Some participants may be great at throwing out ideas while others love the challenge of digging deep to validate the solutions identified as the best options.” Giving teams the freedom to experiment is essential. “When teams are offered the space to create, try, fail, and try again, they are given the opportunity to learn from those experiences and bring that insight into their next projects,” Hapanowicz says.


Protect the Pipe! Secure CI/CD Pipelines With a Policy-Based Approach

Improved security for production systems has forced attackers to look for other avenues. The improvements may be due to the increase in cloud and managed services and general security awareness and availability of tools. With the adoption of programmable infrastructure and Infrastructure-as-Code (IaC), build and delivery systems now have access to production systems. This means a compromise in the build system can be used to access production systems and, in the case of a software vendor, access to customer environments. Applications are increasingly composed of hundreds of OSS and commercial components. This increases the application's exposure and presents several ways to add malicious code to an application. All of these factors have contributed to attackers shifting focus to Continuous Integration and Continuous Delivery (CI/CD) systems as an easier target for infiltrating multiple production systems. Therefore, it is essential that organizations give equal consideration to securing their CI/CD pipelines, just as they do their production workloads.
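The excerpt doesn't spell out what a policy-based approach looks like in practice, so here is a hedged Python sketch of a pre-deploy policy gate that fails a build when its (hypothetical) pipeline manifest violates simple rules such as unpinned images or unsigned provenance. Real pipelines often delegate this kind of check to a dedicated policy engine such as Open Policy Agent.

```python
# Hedged sketch of a pre-deploy policy gate; the manifest format and rules
# are hypothetical. Real policy-based pipelines often use a dedicated engine
# such as Open Policy Agent rather than hand-rolled checks like these.
def check_pipeline(manifest: dict) -> list:
    violations = []
    for step in manifest.get("steps", []):
        name = step.get("name", "?")
        image = step.get("image", "")
        if ":" not in image or image.endswith(":latest"):
            violations.append(f"step '{name}' uses an unpinned image: {image!r}")
        if step.get("privileged"):
            violations.append(f"step '{name}' requests privileged mode")
    if not manifest.get("provenance", {}).get("signed"):
        violations.append("build provenance is not signed")
    return violations

example_manifest = {
    "steps": [{"name": "build", "image": "python:latest", "privileged": True}],
    "provenance": {"signed": False},
}
for violation in check_pipeline(example_manifest):
    print("POLICY VIOLATION:", violation)   # a real gate would fail the pipeline run
```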



Quote for the day:

"Superlative leaders are fully equipped to deliver in destiny; they locate eternally assigned destines." -- Anyaele Sam Chiyson

Daily Tech Digest - September 05, 2022

How to handle a multicloud migration: Step-by-step guide

The first order of business is to determine exactly what you want out of a multicloud platform: what needs are in play, which functions and services should be relocated, which ones may or should stay in house, what constitutes a successful migration, and what advantages and pitfalls may arise. You may have a lead on a vendor offering incentives or discounts, or company regulations may prohibit another type of vendor or multicloud service, and this should be part of the assessment. The next step is to determine what sort of funding you have to work with and match this against the estimated costs of the new platform based on your expectations as to what it will provide you. There may be a per-user or per-usage fee, flat fees for services, annual subscriptions or specific support charges. It may be helpful to do some initial research on average multicloud migrations or vendors offering the services you intend to utilize to give finance and management a baseline for what they should expect to allocate for this new environment, so there are no misconceptions or surprises regarding costs.


Intro to blockchain consensus mechanisms

Every consensus mechanism exists to solve a problem. Proof of Work was devised to solve the problem of double spending, where some users could attempt to transfer the same assets more than once. The first challenge for a blockchain network was thus to ensure that values were only transferred once. Bitcoin's developers wanted to avoid using a centralized “mint” to track all transactions moving through the blockchain. While such a mint could securely deny double-spend transactions, it would be a centralized solution. Decentralizing control over assets was the whole point of the blockchain. Instead, Proof of Work shifts the job of validating transactions to individual nodes in the network. As each node receives a transaction, it attempts the expensive calculation required to discover a rare hash. The resulting "proof of work" ensures that a certain amount of time and computing power were expended by the node to accept a block of transactions. Once a block is hashed, it is propagated to the network with a signature. Assuming it meets the criteria for validity, other nodes in the network accept this new block, add it to the end of the chain, and start work on the next block as new transactions arrive.
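To make the mechanism concrete, here is a minimal Python sketch of the proof-of-work idea: keep incrementing a nonce until the block hash falls under a difficulty target, expressed here as a number of leading zero hex digits. It is a toy, not Bitcoin's actual algorithm (which uses double SHA-256 and a numeric target), but it shows why producing a block is expensive while verifying one is cheap.

```python
import hashlib
import json
import time

def mine_block(transactions, prev_hash, difficulty=4):
    """Toy proof of work: increment a nonce until the block hash starts with
    `difficulty` zero hex digits. Real networks (e.g. Bitcoin) use double
    SHA-256 and a numeric target, but the principle is the same."""
    nonce = 0
    target_prefix = "0" * difficulty
    while True:
        header = json.dumps({"prev": prev_hash, "txs": transactions,
                             "nonce": nonce}, sort_keys=True)
        block_hash = hashlib.sha256(header.encode()).hexdigest()
        if block_hash.startswith(target_prefix):
            return nonce, block_hash   # the proof other nodes can verify cheaply
        nonce += 1

start = time.time()
nonce, block_hash = mine_block(["alice->bob:5"], prev_hash="(previous block hash)")
print(f"nonce={nonce} hash={block_hash[:16]}... "
      f"found in {time.time() - start:.2f}s")
```

Raising the difficulty by one hex digit multiplies the expected number of attempts by 16, which is how a network can tune how much work each block costs.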


Data’s Struggle to Become an Asset

Data’s biggest problem is that it is intangible and malleable. How can you attach a value to something that is always changing, may disappear, and has no physical presence beyond the bytes it appropriates in a database? In many organizations, there are troves of data that are collected and never used. Data is also easy to accumulate. Collectively, these factors make it easy for corporate executives to view data as a commodity, and not as something of value. Research firms like Deloitte argue that data will never become an indispensable asset for organizations unless it can deliver tangible business results: “Finding the right project requires the CDO (chief data officer) to have a clear understanding of the organization's wants and needs,” according to Deloitte. “For example, while developing the US Air Force’s data strategy, the CDO identified manpower shortages as a critical issue. The CDO prioritized this limitation early on in the implementation of the data strategy and developed a proof of concept to address it.”


In The Face Of Recession, Investing In AI Is A Smarter Strategy Than Ever

Many business leaders make the mistake of overspending on RPA platforms, blinded by the promise of some future ROI. In reality, due to the need to customize RPA to every client, these decision-makers don’t actually know how long it will take to begin reaping the benefits—if they ever do. I, myself, have made this mistake in the past, spending far too much time and money on a tedious RPA solution that was intended to solve a customer success back-office function, only to find that after the overhead of managing it, the gains were marginal. If business leaders want to fully maximize their investments and reap quicker benefits, they’ll go one giant leap beyond automation, landing in the realm of autonomous artificial intelligence (AI). True AI solutions, which continually learn from a company’s data to become increasingly accurate with time, are the holy grail of ROI. Finance leaders are in a great position to lead the way within their own companies by implementing AI solutions in the accounting function. Across industries, these teams are sagging under the weight of endless, tedious accounting tasks, using outdated, ineffective technology and wasting significant time fixing human errors.


Top 8 Data Science Use Cases in The Finance Industry

Financial institutions can be vulnerable to fraud because of their high volume of transactions. In order to prevent losses caused by fraud, organizations must use different tools to track suspicious activities. These include statistical analysis, pattern recognition, and anomaly detection via machine/deep learning. By using these methods, organizations can identify patterns and anomalies in the data and determine whether or not there is fraudulent activity taking place. ... Tools such as CRM and social media dashboards use data science to help financial institutions connect with their customers. They provide information about their customers’ behavior so that they can make informed decisions when it comes to product development and pricing. Remember that the finance industry is highly competitive and requires continuous innovation to stay ahead of the game. Data science initiatives, such as a Data Science Bootcamp or training program, can be highly effective in helping companies develop new products and services that meet market demands. Investment management is another area where data science plays an important role. 
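As a rough illustration of the anomaly-detection piece, here is a hedged Python sketch using scikit-learn's IsolationForest on synthetic transaction amounts and times of day. The data and threshold are invented; a real fraud pipeline would engineer far richer features and combine several detection methods, as the excerpt notes.

```python
# Synthetic illustration only: flag anomalous transactions with an Isolation
# Forest. A real fraud pipeline would engineer far richer features and combine
# several detection methods.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=[50, 12], scale=[20, 4], size=(1000, 2))   # (amount, hour)
fraud = rng.normal(loc=[900, 3], scale=[100, 1], size=(10, 2))     # large, late-night
X = np.vstack([normal, fraud])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)            # -1 means anomaly, 1 means normal
print("transactions flagged for review:", int((labels == -1).sum()))
```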


A Bridge Over Troubled Data: Giving Enterprises Access to Advanced Machine Learning

Thankfully, the smart data fabric concept removes most of these data troubles, bridging the gap between the data and the application. The fabric focuses on creating a unified approach to access, data management and analytics. It builds a universal semantic layer using data management technologies that stitch together distributed data regardless of its location, leaving it where it resides. A fintech organisation can build an API-enabled orchestration layer, using the smart data fabric approach, giving the business a single source of reference without the necessity to replace any systems or move data to a new, central location. Capable of in-flight analytics, more advanced data management technology within the fabric provides insights in real time. It connects all the data including all the information stored in databases, warehouses and lakes and provides the vital and seamless support for end-users and applications. Business teams can delve deeper into the data, using advanced capabilities such as business intelligence. 


Why You Should Start Testing in the Cloud Native Way

Consistently tracking metrics around QA and test pass/failure rates is so important when you’re working in global teams with countless different types of components and services. After all, without benchmarking, how can you measure success? Testkube does just that. Because it’s aware of the definition of all your tests and results, you can use it as a centralized place to monitor the pass/failure rate of your tests. Plus it defines a common result format, so you get consistent result reporting and analysis across all types of tests. ... If you run your applications in a non-serverless manner in the cloud and don’t use virtual machines, I’m willing to bet you probably use containers at this point and you might have faced the challenges of containerizing all your testing activities. Well, with cloud native tests in Testkube, that’s not necessary. You can just import your test files into Testkube and run them out of the box. ... Having restricted access to an environment that we need to test or tinker with is an issue that most of us face at some point in our careers.


Why IT leaders should prioritize empathy

It’s simple enough to practice empathy outside of work, but IT challenges make practicing empathy at work a bigger struggle. Fairly or unfairly, many customers expect technology to work 100 percent of the time. When it doesn’t, it falls on IT leaders to go into crisis mode. Considering many of these applications are mission-critical to the customer’s organizational performance, their reaction makes sense. An unempathetic employee in this situation would ignore the context behind a customer’s emotional response. They might go on the defensive or fail to address the customer’s concerns with urgency. A response like this can prove detrimental to customer loyalty and retention – it takes up to 12 positive customer experiences to make up for one negative experience. Every workplace consists of many different personality types and cultural backgrounds – all with different understandings of and comfort toward practicing empathy. Because of this diversity, aligning on a single company-wide approach to empathy is easier said than done. Yet if your organization fails to secure employee buy-in around the importance of empathy, you risk alienating your customers and letting employees who aren’t well-versed in empathetic communication hold you back.


What devops needs to know about data governance

Looking one step beyond compliance considerations, the next level of importance that drives data governance efforts is trust that data is accurate, timely, and meets other data quality requirements. Moses has several recommendations for tech teams. She says, “Teams must have visibility into critical tables and reports and treat data integrity like a first-class citizen. True data governance needs to go beyond defining and mapping the data to truly comprehending its use. An approach that prioritizes observability into the data can provide collective significance around specific analytics use cases and allow teams to prioritize what data matters most to the business.” Kirk Haslbeck, vice president of data quality at Collibra, shares several best practices that improve overall trust in the data. He says, “Trusted data starts with data observability, using metadata for context and proactively monitoring data quality issues. While data quality and observability establish that your data is fit to use, data governance ensures its use is streamlined, secure, and compliant. Both data governance and data quality need to work together to create value from data.”


The Power of AI Coding Assistance

“With AI-powered coding technology like Copilot, developers can work as before, but with greater speed and satisfaction, so it’s really easy to introduce,” explains Oege De Moor, vice president of GitHub Next. “It does help to be explicit in your instructions to the AI.” He explains that during the Copilot technical preview, GitHub heard from users that they were writing better and more precise explanations in code comments because the AI gives them better suggestions. “Users also write more tests because Copilot encourages developers to focus on the creative part of crafting good tests,” De Moor explains. “So, these users feel they write better code, hand in hand with Copilot.” He adds that it is, of course, important that users are made aware of the limitations of the technology. “Like all code, suggestions from AI assistants like Copilot need to be carefully tested, reviewed, and vetted,” he says. “We also continuously work to improve the quality of the suggestions made by the AI.” GitHub Copilot is built with Codex -- a descendant of GPT-3 -- which is trained on publicly available source code and natural language.



Quote for the day:

"Great Groups need to know that the person at the top will fight like a tiger for them." -- Warren G. Bennis

Daily Tech Digest - September 01, 2022

Cloud Applications Are The Major Catalysts For Cyber Attacks

Cybersecurity threats have risen substantially in recent years because criminals have built lucrative businesses from stealing data and nation-states have come to see cybercrime as an opportunity to acquire information, influence, and advantage over their rivals. This has paved the way for potentially catastrophic attacks such as the WannaCrypt ransomware campaign that dominated recent headlines. This evolving threat landscape has begun to change the way customers view the cloud. “It was only a few years ago when most of my customer conversations started with, ‘I can’t go to the cloud because of security. It’s not possible,’” said Julia White, Microsoft’s corporate vice president for Azure and security. “And now I have people, more often than not, saying, ‘I need to go to the cloud because of security.’” It’s not an exaggeration to say that cloud computing is completely changing our society. It’s upending major industries such as the retail sector, enabling the kind of mathematical computation that is fueling an artificial intelligence revolution, and even having a profound impact on how we communicate with friends, family, and colleagues.


Intel AI chief Wei Li: Someone has to bring today's AI supercomputing to the masses

As is often the case in technology, everything old is new again. Suddenly, says Li, everything in deep learning is coming back to the innovations of compilers back in the day. "Compilers had become irrelevant" in recent years, he said, an area of computer science viewed as largely settled. "But because of deep learning, the compiler is coming back," he said. "We are in the middle of that transition." In his PhD dissertation at Cornell, Li developed a computer framework for processing code in very large systems with what are called "non-uniform memory access," or NUMA. His program refashioned code loops for as much parallel processing as possible. But it also did something else particularly important: it decided which code should run depending on which memories the code needed to access at any given time. Today, says Li, deep learning is approaching the point where those same problems dominate. Deep learning's potential is mostly gated not by how many matrix multiplications can be computed but by how efficiently the program can access memory and bandwidth.


Event Streaming and Event Sourcing: The Key Differences

Event streaming employs the pub-sub approach to enable more accessible communication between systems. In the pub-sub architectural pattern, consumers subscribe to a topic or event, and producers post to these topics for consumers’ consumption. The pub-sub design decouples the publisher and subscriber systems, making it easier to scale each system individually. The publisher and subscriber systems communicate through a message broker like Apache Pulsar. When a state changes or an event occurs, the producer sends the data (data sources include web apps, social media and IoT devices) to the broker, after which the broker relays the event to the subscriber, who then consumes the event. Event streaming involves the continuous flow of data from sources like applications, databases, sensors and IoT devices. Event streams employ stream processing, in which data undergoes processing and analysis during generation. This quick processing translates to faster results, which is valuable for businesses with a limited time window for taking action, as with any real-time application.
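To show the decoupling the pub-sub pattern provides, here is a minimal in-process Python sketch of a broker with topics, producers, and subscribers. It stands in for a real message broker such as Apache Pulsar or Kafka, which would add persistence, partitioning, and acknowledgements; all names here are illustrative.

```python
from collections import defaultdict

class Broker:
    """Tiny in-process stand-in for a message broker such as Apache Pulsar.
    Producers and subscribers only know the topic name, never each other."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        for handler in self.subscribers[topic]:
            handler(event)

broker = Broker()
broker.subscribe("sensor.temperature", lambda e: print("alerting on", e))
broker.subscribe("sensor.temperature", lambda e: print("archiving", e))

# The producer (say, an IoT gateway) publishes without knowing how many
# consumers exist or what they do with the event.
broker.publish("sensor.temperature", {"device": "th-01", "celsius": 71.3})
```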


Big cloud rivals hit back over Microsoft licensing changes

In a nutshell, the changes that come into effect from October allow customers with Software Assurance or subscription licenses to use these existing licenses "to install software on any outsourcers' infrastructure" of their choice. But as The Register noted at the time, this specifically excludes "Listed Providers", a group that just happens to include Microsoft's biggest cloud rivals – AWS, Google and Alibaba – as well as Microsoft's own Azure cloud, in a bid to steer customers to Microsoft's partner network. ... These criticisms are not entirely new, and some in the cloud sector made similar points following Microsoft's disclosure of some of the licensing changes it intended to make back in May. One cloud operator who requested anonymity told The Register in June that Redmond's proposed changes fail to "move the needle" and ignore the company's "other problematic practices." Another AWS exec, Matt Garman, posted on LinkedIn in July that Microsoft's proposed changes did not represent fair licensing practice and were not what customers wanted.


Machine learning at the edge: The AI chip company challenging Nvidia and Qualcomm

Built on 16nm technology, the MLSoC’s processing system consists of computer vision processors for image pre- and post-processing, coupled with dedicated ML acceleration and high-performance application processors. Surrounding the real-time intelligent video processing are memory interfaces, communication interfaces, and system management — all connected via a network-on-chip (NoC). The MLSoC features low operating power and high ML processing capacity, making it ideal as a standalone edge-based system controller, or to add an ML-offload accelerator for processors, ASICs and other devices. The software-first approach includes carefully-defined intermediate representations (including the TVM Relay IR), along with novel compiler-optimization techniques. ... Many ML startups are focused on building only pure ML accelerators and not an SoC that has a computer-vision processor, applications processors, CODECs, and external memory interfaces that enable the MLSoC to be used as a stand-alone solution not needing to connect to a host processor. Other solutions usually lack network flexibility, performance per watt, and push-button efficiency – all of which are required to make ML effortless for the embedded edge.


Why CIOs Need to Be Even More Dominant in the C-Suite Right Now

“Now more than ever, we’re seeing a pressing demand for CIOs to deliver digital transformation that enables business growth to energize the top line or optimize operations to eliminate cost and help the bottom line,” says Savio Lobo, CIO of Ensono. This requires the CIO to have a deep understanding of the business and surface decisions that may influence these objectives. Large-scale digital solutions and capabilities, however, often cannot be implemented simultaneously, especially when they require significant change in how customers and staff engage with people and processes. This means ruthless prioritization decisions may need to be made that include what is moving forward at any given time and equally importantly, what is not. “While executing a large initiative, there will also be people, process and technology choices to be made and these need to be made in a timely manner,” Lobo adds. This may look unique for every organization but should include collaboration on the discovery and implementation and an open feedback loop for how systems and processes are working or not working in each stakeholder’s favor. 


Ensuring security of data systems in the wake of rogue AI

A ‘Trusted Computing’ model, like the one developed by the Trusted Computing Group (TCG), can be easily applied to all four of these AI elements in order to fully secure against a rogue AI. Considering the data set element of an AI, a Trusted Platform Module (TPM) can be used to sign and verify that data has come from a trusted source. A hardware Root of Trust, such as the Device Identifier Composition Engine (DICE), can make sure that sensors and other connected devices maintain high levels of integrity and continue to provide accurate data. Boot layers within a system each receive a DICE secret, which combines the preceding secret on the previous layer with the measurement of the current one. This ensures that when there is a successful exploit, the exposed layer’s measurements and secrets will be different, securing data and protecting itself from any data disclosure. DICE also automatically re-keys the device if a flaw is unearthed within the device firmware. The strong attestation offered by the hardware makes it a great tool to discover any vulnerabilities in any required updates.
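As an illustration of the layered-secret idea described above (not the TCG's actual specification), here is a hedged Python sketch in which each boot layer's secret is derived from the previous layer's secret plus a hash measurement of the current layer. The device secret and image names are hypothetical.

```python
import hashlib
import hmac

def derive_layer_secret(previous_secret: bytes, layer_image: bytes) -> bytes:
    """Sketch of DICE-style chaining: a layer's secret is derived from the
    previous layer's secret combined with a measurement (hash) of the current
    layer. Real implementations follow the TCG DICE specifications; this only
    illustrates the chaining idea."""
    measurement = hashlib.sha256(layer_image).digest()
    return hmac.new(previous_secret, measurement, hashlib.sha256).digest()

# Hypothetical unique device secret and boot chain images.
uds = b"unique-device-secret-provisioned-at-manufacture"
boot_chain = [b"bootloader-v1", b"firmware-v1", b"application-v1"]

secret = uds
for image in boot_chain:
    secret = derive_layer_secret(secret, image)
print("final layer secret:", secret.hex()[:16], "...")

# If the firmware is tampered with, its measurement changes, so every secret
# derived after it changes too and keys from the compromised chain stop working.
```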


The Implication of Feedback Loops for Value Streams

The practical implication for software engineering management is to first address feedback loops that generate a lot of bugs/issues in order to get your capacity back. For example, if you have a fragile architecture or code of low maintainability that requires a lot of rework after any new change implementation, it is obvious that refactoring is necessary to regain engineering productivity; otherwise, engineering team capacity will be low. The last observation is that the lead time will depend on the simulation duration: the longer you run the value stream, the more lead-time variants you will get. Such behavior is the direct implication of the value stream structure, with its redo feedback loop and its probability distribution between the output queue and the redo queue. If you are an engineering manager who has inherited legacy code with significant accumulated debt, it might be reasonable to consider incremental solution rewriting. Otherwise, the speed of delivery will remain very slow indefinitely, not only during the modernization period. This is the art of simplicity: greater complexity yields more variation, which increases the probability of results occurring outside of acceptable parameters.
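To make the redo-loop effect tangible, here is a small Monte Carlo sketch of a single work item flowing through a value stream with a rework feedback loop. The probabilities and durations are invented for illustration; the point is that a higher redo probability inflates both the mean and the spread of lead times.

```python
import random

def simulate_lead_time(redo_probability, work_days=2.0, rework_days=1.5):
    """Toy model of one work item flowing through a value stream with a redo
    feedback loop: after each pass it is sent back for rework with the given
    probability. All parameters are invented for illustration."""
    lead_time = work_days
    while random.random() < redo_probability:
        lead_time += rework_days
    return lead_time

random.seed(1)
for p in (0.1, 0.3, 0.6):
    samples = [simulate_lead_time(p) for _ in range(10_000)]
    mean = sum(samples) / len(samples)
    print(f"redo probability {p}: mean lead time {mean:.1f} days, "
          f"max {max(samples):.1f} days")
```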


Beat these common edge computing challenges

Realizing the benefits of edge computing depends on a thoughtful strategy and careful evaluation of your use cases, in part to ensure that the upside will dwarf the natural complexity of edge environments. “CIOs shouldn’t adopt or force edge computing just because it’s the trendy thing – there are real problems that it’s intended to solve, and not all scenarios have those problems,” says Jeremy Linden, senior director of product management at Asimily. Part of the intrinsic challenge here is that one of edge computing’s biggest problem-solution fits – latency – has sweeping appeal. Not many IT leaders are pining for slower applications. But that doesn’t mean it’s a good idea (or even feasible) to move everything out of your datacenter or cloud to the edge. “So for example, an autonomous car may have some of the workload in the cloud, but it inherently needs to react to events very quickly (to avoid danger) and do so in situations where internet connectivity may not be available,” Linden says. “This is a scenario where edge computing makes sense.” In Linden’s own work – Asimily does IoT security for healthcare and medical devices – optimizing the cost-benefit evaluation requires a granular look at workloads.


Tenable CEO on What's New in Cyber Exposure Management

Tenable wants to provide customers with more context around what threat actors are exploiting in the wild to both refine and leverage the analytics capabilities the company has honed, Yoran says. Tenable must have context around what's mission-critical in a customer's organization to help clients truly understand their risk and exposure rather than just add to their cyber noise, he adds. Tenable has spent more on vulnerability management-focused R&D over the past half-decade than its two closest competitors combined, which has allowed the firm to deliver differentiated capabilities, Yoran says. Unlike competitors who have expanded their offerings to include everything from logging and SIEM to EDR and managed security services, Yoran says Tenable has remained laser-focused on risk. "The three primary vulnerability management vendors have three very different strategies and they've been on divergent paths for a long time," Yoran says. "For us, the key to success has been and will continue to be that focus on helping people assess and understand risk."



Quote for the day:

"Get your facts first, then you can distort them as you please." -- Mark Twain

Daily Tech Digest - August 31, 2022

Beyond “Agree to Disagree”: Why Leaders Need to Foster a Culture of Productive Disagreement and Debate

The business imperative of nurturing a culture of productive disagreement is clear. The good news is that senior leaders can play a highly influential role in this regard. By integrating the concepts of openness and healthy debate into their own and their organization’s language, they can institutionalize new norms. Their actions can help to further reset the rules of engagement by serving as a model for employees to follow. ... Leaders should incorporate the concept of productive debate into corporate value statements and the way they address colleagues, employees, and shareholders. Michelin, for example, built debate into its value statement. One of its organizational values is “respect for facts,” which it describes as follows: “We utilize facts to learn, honestly challenge our beliefs….” Another company that espouses debate as a value is Bridgewater. Founder Ray Dalio ingrained principles and subprinciples such as “be radically open-minded” and “appreciate the art of thoughtful disagreement” in the investment management company’s culture.


Using technology to power the future of banking

Because I believe that anyone that wants to be a CIO or a CTO, particularly in the way that the industry is progressing, you need to understand technology. So, staying close to the technology and curious and wanting to solve those problems has helped me. But there's another part to it, too. In every one of my roles, there have been times when I've seen something that wasn't necessarily working and I had ideas and wanted to help, but it might’ve been outside of my responsibility. I've always leaned in to help, even though I knew that it was going to help someone else in the organization, because it was the right thing to do and it helped the company, it helped other people. So, it ended up building stronger relationships, but also building my skillset. I think that's been a part of my rise too, and it's something that's just incredibly powerful from a cultural perspective. That’s something that I love here. Everybody is in it together to work that way. But I also think that it just speaks volumes about an individual, and people gravitate to want to work with people that operate that way. 


Physics breakthrough could lead to new, more efficient quantum computers

According to the researchers, this technique for generating stable qubits could have massive implications for the entire field of quantum computing, but especially for scalability and noise-reduction: At this stage, our system faces mostly technical limitations, such as optical losses, finite cooperativity and imperfect Raman pulses. Even modest improvements in these respects would put us within reach of loss and fault tolerance thresholds for quantum error correction. It’ll take some time to see how well this experimental generation of qubits translates into an actual computation device, but there’s plenty of reason to be optimistic. There are numerous different methods by which qubits can be made, and each lends to its own unique machine architecture. The upside here is that the scientists were able to generate their results with a single atom. This indicates that the technique would be useful outside of computing. If, for example, it could be developed into a two-atom system, it could lead to a novel method for secure quantum communication.


Organizations security: Highlighting the importance of compliant data

When choosing a web data collection platform or network, it’s important that security professionals use a compliance-driven service provider to safeguard the integrity of their network and operations. Compliant data collection networks ensure that security operators have a safe and suitable environment in which to perform their work without being compromised by potential bad actors using the same network or proxy infrastructure. These data providers institute extensive and multifaceted compliance processes that include a number of internal as well as external procedures and safeguards, such as manual reviews and third-party audits, to identify non-compliant active patterns and ensure that all use of the network follows the overall compliance guidelines. This of course also includes abiding by the data gathering guidelines established by international regulators, such as the European Union and the US State of California, as well as enforcing others who follow public web scraping best practices for compliant and reliable web data scraping or collection.


TensorFlow, PyTorch, and JAX: Choosing a deep learning framework

It’s not like TensorFlow has stood still for all that time. TensorFlow 1.x was all about building static graphs in a very un-Python manner, but with the TensorFlow 2.x line, you can also build models using the “eager” mode for immediate evaluation of operations, making things feel a lot more like PyTorch. At the high level, TensorFlow gives you Keras for easier development, and at the low-level, it gives you the XLA optimizing compiler for speed. XLA works wonders for increasing performance on GPUs, and it’s the primary method of tapping the power of Google’s TPUs (Tensor Processing Units), which deliver unparalleled performance for training models at massive scales. Then there are all the things that TensorFlow has been doing well for years. Do you need to serve models in a well-defined and repeatable manner on a mature platform? TensorFlow Serving is there for you. Do you need to retarget your model deployments for the web, or for low-power compute such as smartphones, or for resource-constrained devices like IoT things? TensorFlow.js and TensorFlow Lite are both very mature at this point. 
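For context, here is a minimal sketch of what those two TensorFlow 2.x behaviors look like in practice: eager evaluation by default, and opting a function into XLA via jit_compile. It assumes a recent TensorFlow 2.x install with XLA support; the shapes and values are arbitrary.

```python
import tensorflow as tf

# TensorFlow 2.x executes eagerly by default, so operations run immediately,
# much as they would in PyTorch.
x = tf.random.normal([4, 8])
w = tf.random.normal([8, 2])
print(tf.matmul(x, w).shape)          # evaluated right away: (4, 2)

# Opting a function into the XLA optimizing compiler (flag available in
# recent TF 2.x releases):
@tf.function(jit_compile=True)
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

b = tf.zeros([2])
print(dense_relu(x, w, b).shape)      # traced, compiled and fused on first call
```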


IoT Will Power Itself – Power Electronics News

Energy harvesting is nothing new, with solar power being one of the most famous examples. Solar energy works well for powering parking meters, but if we’re going to bring online the packaging and containers that are at the heart of our supply chains—things that are indoors and stacked on top of each other—we need another solution. The technology that gives mundane things like transporting cash registers both their intelligence and energy-harvesting power are small, inexpensive, brand-size computers printed as stickers and affixed to cash registers, sweater tags, vaccine vials, or other items racing in the global supply chain. These sticker tags, called IoT Pixels, include an ARM processor, a Bluetooth radio, sensors, and a security module — basically a complete system-on-a-chip (SoC). All that remains is to power this tiny SoC in the most efficient and economical way possible. It turns out that as wireless networks permeate our lives and radio frequency (RF) activity is everywhere, the prospect of recycling that RF activity into energy is the most practical and ubiquitous solution.


CoAuthor: Stanford experiments with human-AI collaborative writing

CoAuthor is based on GPT-3, one of the recent large language models from OpenAI, trained on a massive collection of already-written text on the internet. It would be a tall order to think a model based on existing text might be capable of creating something original, but Lee and her collaborators wanted to see how it can nudge writers to deviate from their routines—to go beyond their comfort zone (e.g., vocabularies that they use daily)—to write something that they would not have written otherwise. They also wanted to understand the impact such collaborations have on a writer’s personal sense of accomplishment and ownership. “We want to see if AI can help humans achieve the intangible qualities of great writing,” Lee says. Machines are good at doing search and retrieval and spotting connections. Humans are good at spotting creativity. If you think this article is written well, it is because of the human author, not in spite of it. ... The goal, Lee says, was not to build a system that can make humans write better and faster. Instead, it was to investigate the potential of recent large language models to aid in the writing process and see where they succeed and fail. 


LastPass source code breach – do we still recommend password managers?

The breach itself actually happened two weeks before that, the company said, and involved attackers getting into the system where LastPass keeps the source code of its software. From there, LastPass reported, the attackers “took portions of source code and some proprietary LastPass technical information.” We didn’t write this incident up last week, because there didn’t seem to be a lot that we could add to the LastPass incident report – the crooks rifled through their proprietary source code and intellectual property, but apparently didn’t get at any customer or employee data. In other words, we saw this as a deeply embarrassing PR issue for LastPass itself, given that the whole purpose of the company’s own product is to help customers keep their online accounts to themselves, but not as an incident that directly put customers’ online accounts at risk. However, over the past weekend we’ve had several worried enquiries from readers (and we’ve seen some misleading advice on social media), so we thought we’d look at the main questions that we’ve received so far.


FBI issues alert over cybercriminal exploits targeting DeFi

The FBI observed cybercriminals exploiting vulnerabilities in smart contracts that govern DeFi platforms in order to steal investors’ cryptocurrency. In a specific example, the FBI mentioned cases where hackers used a “signature verification vulnerability” to plunder $321 million from the Wormhole token bridge back in February. It also mentioned a flash loan attack that was used to trigger an exploit in the Solana DeFi protocol Nirvana in July. However, that’s just a drop in a vast ocean. According to an analysis from blockchain security firm CertiK, since the start of the year, over $1.6 billion has been exploited from the DeFi space, surpassing the total amount stolen in 2020 and 2021 combined. While the FBI admitted that “all investment involves some risk,” the agency has recommended that investors research DeFi platforms extensively before use and, when in doubt, seek advice from a licensed financial adviser. The agency said it was also very important that the platform's protocols are sound and to ensure they have had one or more code audits performed by independent auditors.


Privacy and security issues associated with facial recognition software

Facial recognition technology in surveillance has improved dramatically in recent years, meaning it is quite easy to track a person as they move about a city, he said. One of the privacy concerns about the power of such technology is who has access to that information and for what purpose. Ajay Mohan, principal, AI & analytics at Capgemini Americas, agreed with that assessment. “The big issue is that companies already collect a tremendous amount of personal and financial information about us [for profit-driven applications] that basically just follows you around, even if you don’t actively approve or authorize it,” Mohan said. “I can go from here to the grocery store, and then all of a sudden, they have a scan of my face, and they’re able to track it to see where I’m going.” In addition, artificial intelligence (AI) continues to push the capabilities of facial recognition systems in terms of their performance, while from an attacker perspective, there is emerging research leveraging AI to create facial “master keys,” that is, AI generation of a face that matches many different faces, through the use of what’s called Generative Adversarial Network techniques, according to Lewis.



Quote for the day:

"If you don't demonstrate leadership character, your skills and your results will be discounted, if not dismissed." -- Mark Miller

Daily Tech Digest - August 30, 2022

The Great Resignation continues, and companies are finding new ways to tackle the talent shortage

The Great Resignation is far from over. According to a study of 1,000 hiring managers in the US, 60% are struggling to find quality talent needed to fill open roles, with many now turning to freelance workers to bridge the growing skills gap. According to Upwork's most recent Future Workforce Report, 56% of companies that hire freelance workers hired freelancers at an increased rate within the last year. Companies are seeking out skilled independent workers to fill empty positions to compensate for the ongoing loss of talent, particularly in data science, accounting, and IT departments. Many companies are still feeling the burn of the COVID-19 pandemic and its effect on job trends. The ongoing tendency for workers to quit their jobs in search of better opportunities is persistent, and tech workers have proved particularly difficult to hire. Hiring managers surveyed by Upwork said data science and analytics roles would be the hardest to hire for over the next six months (60%), followed by architecture and engineering (58%) and IT & networking (58%).


Serverless Is the New Timeshare

There’s one great use case I can think of: webhooks. Getting the duct tape code for webhooks is always a pain. They don’t trigger often and dealing with that is a chore. Using a serverless function to just add stuff to the database and do the work can be pretty simple. Since a callback is hard to debug anyway, the terrible debugging experience in serverless isn’t a huge hindrance. But for every other use case, I’m absolutely baffled. People spend so much time checking and measuring throughput yet just using one slightly larger server and only local calls will yield more throughput than you can possibly need. Without all the vendor tie-ins that we fall into. Hosting using Linode, Digital Ocean, etc. would save so much money. On the time-to-market aspect, just using caching and quick local tools would be far easier than anything you can build in the cloud. Containers are good progress and they made this so much simpler, yet we dropped the ball on this and went all in on complexity with stuff like Kubernetes. Don’t get me wrong. K8s are great. 
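As a sketch of that webhook use case, here is what a minimal AWS Lambda-style handler might look like in Python. The signature header, payload shape, and save_event helper are all hypothetical placeholders rather than any particular provider's API.

```python
import json

def handler(event, context):
    """Sketch of a serverless webhook receiver using an AWS Lambda-style
    signature. The signature header, payload fields and save_event helper
    are hypothetical placeholders, not any particular provider's API."""
    if event.get("headers", {}).get("x-webhook-signature") is None:
        return {"statusCode": 401, "body": "missing signature"}

    payload = json.loads(event.get("body") or "{}")
    save_event(payload)                   # e.g. insert into a managed database
    return {"statusCode": 200, "body": "ok"}

def save_event(payload: dict) -> None:
    # Placeholder: a real function would write to DynamoDB, Postgres, etc.
    print("storing webhook payload of type:", payload.get("type"))

if __name__ == "__main__":
    fake_event = {"headers": {"x-webhook-signature": "abc"},
                  "body": json.dumps({"type": "payment.settled", "amount": 42})}
    print(handler(fake_event, context=None))
```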


The 6 most overhyped technologies in IT

These CIOs say that metaverse enthusiasts, including vendors who have a stake in its promotion, have created a sense that this technology will have us all living in a new digital realm. Most aren’t buying it. “Could it turn out to be great? Well, possibly. But so many other things have to change in order for that to work,” says Bob Johnson, CIO of The American University of Paris, who extended his comments to include the related technologies of extended reality (XR), virtual reality (VR), and augmented reality (AR). “They have some wonderful applications, but they don’t change the way we live.” ... CIOs also labeled blockchain as overhyped, noting that the technology has failed to be as transformative or even as useful as hoped nearly a decade into its use. “Initially, the name ‘blockchain’ sounded pretty cool and quickly became a buzzword that drew interest and piqued curiosities,” says Josh Hamit, senior vice president and CIO of Altra Federal Credit Union and a member of ISACA’s Emerging Trends Working Group. “However, in actual practice, it has proved more difficult for many organizations to identify tangible use cases for blockchain, or distributed ledger as it is also known.”


How fusing power and process automation delivers operational resilience

The integration of power and process is a catalyst for operational resilience and improved sustainability across the lifecycle of the plant. This integrated, digitalised approach drives Electrical, Instrumentation and Control (EI&C) CAPEX reductions up to 20% and OPEX efficiencies, including decreased unplanned downtime up to 15%, in addition to improving bottom line profitability by three points. End users see energy procurement cost reductions of 2-5% and carbon footprint reductions of 7 – 12% when implementing these strategies. It offers a comprehensive view of asset performance management, energy management, and the value chain from design through construction, commissioning, operations, and maintenance. When undergoing such an integration effort, implementing the right strategies can improve operational resilience for better anticipation, prevention, recovery from, and adaptability to market dynamics and events. This plant-wide data collection, reliable control and command exchange between systems, operators and control room will empower the workforce with clear and verified decision-making.


Multi-stage crypto-mining malware hides in legitimate apps with month-long delay trigger

Once the user downloads and installs an app, the deployment of malicious payloads doesn't happen immediately, which is a strategy to avoid detection. First, the app installer, which is built with a free tool called Inno Setup, reaches out to the developer's website and downloads a password-protected RAR archive that contains the application files. These are deployed under the Program Files (x86)\Nitrokod\[application name] path. The app then checks for the presence of a component called update.exe. If it's not found, it deploys it under the Nitrokod folder and sets up a system scheduled task to execute it after every restart. The installer then collects some information about the victim's system and sends it to the developer's server. Up to this point, the installation is not very unusual for how a legitimate application would behave: collecting some system data for statistics purposes and deploying what looks like an automatic update component. However, after around four system restarts on four different days, update.exe downloads and deploys another component called chainlink1.07.exe.


The new work–life balance

The pandemic seemed to render work–life balance a laughable concept. As white-collar workers set up workstations at home, there was no longer a separation of job and personal time or space. So we need something new, something more useful, to help us think about balance in our lives. Here’s an alternative model. ... There is no right mix, per se, and each individual’s outlook will change over time. When we are in our 20s, we can indulge in more of what we want to do. The same is true later in life, when personal interests can be prioritized. It’s those decades of our 30s, 40s, and 50s that can be particularly challenging—raising a family and building a career, which will include jobs that are stepping stones to more fulfilling roles. These chapters of life gave rise to the widely cited U-shaped happiness curve. To me, that three-part pie chart is useful in determining whether we feel a sense of balance in our lives. And it also helps explain some of the meta-narratives of the moment, including the “great resignation” and the persistent desire of employees to work from home. All that time alone during pandemic lockdowns gave people time to consider the meaning of life and prompted many to quit unrewarding jobs.


Edge computing: 4 considerations for success

Automation is usually accomplished through automation workflows close to the edge endpoints and a centralized control layer. Localized execution guards against high latency and connection disruptions, while centralized control provides integrated control of the entire edge environment. ... The edge can become a bit like the Wild West if you let it. Even with automation and management systems in place, it still takes an architectural commitment to maintain a high degree of consistency across the edge (and datacenter) environment. One rationale for a lack of consistency is that devices at the periphery are often smaller and less powerful than servers in a data center. The reasoning then follows that they need to run different software. But this isn’t necessarily the case – or at least, it isn’t the whole story. You can build system images from the small core Linux operating system you run elsewhere and customize it to add exactly what you need in terms of drivers, extensions, and workloads. Images can then be versioned, tested, signed, and deployed as a unit, so your ops team can know exactly what is running on the devices.


How Observability Can Help Manage Complex IT Networks

“Everything in computing is difficult for humans to see, simply because humans are so much slower than any computer,” Morgan says. “Almost anything we can do to provide visibility into what’s really happening inside the application can be a big help in understanding.” This means not just fixing things that break, but improving things that are working, or explaining them to users and new developers. He points to the oldest observability tool, ad-hoc logging -- still in use today -- but adds that tools like distributed tracing can provide a standard layer of visibility into the entire application without requiring application changes. This in turn reduces the burden on developers (less code to write) and on support staff (fewer distinct things to learn). “As an industry, we’ve created many tools for observability over the years, from print statements to distributed tracing,” Morgan says. “Network analytics bring a welcome uniformity to observability.” He adds that at a certain level, network traffic is the same no matter what the application is doing, so you can easily get equivalent transparency for every service in your application.
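
As a rough illustration of the difference between ad-hoc logging and a uniform visibility layer, the sketch below wraps a function in a toy "span" that records an ID and duration without touching the function body. This is a simplified stand-in, not the tooling Morgan describes; a production system would typically use a tracing library such as OpenTelemetry.

```python
# Toy tracing decorator: adds timing and context to any function without
# changing its body, which is the property that reduces per-developer effort
# compared with hand-written print/log statements.
import functools, logging, time, uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")

def traced(name: str):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            span_id = uuid.uuid4().hex[:8]
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                logging.info("span=%s name=%s duration_ms=%.2f", span_id, name, elapsed_ms)
        return wrapper
    return decorator

@traced("fetch_user")
def fetch_user(user_id: int) -> dict:
    time.sleep(0.05)          # stand-in for a real downstream call
    return {"id": user_id}

if __name__ == "__main__":
    fetch_user(42)
```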


As States Ban Ransom Payments, What Could Possibly Go Wrong?

Victims may not know exactly what all ransomware attackers have encrypted or stolen, and finding out may take substantial time and energy. Likewise, negotiators can sometimes reduce the ransom being demanded by a large factor. In some cases, attackers may also provide a decryptor without a victim having to pay. Perhaps state legislators are attempting to look tough by essentially telling ransomware gangs to look elsewhere. No doubt they also don't want the political baggage associated with spending taxpayer money to enrich criminals. "A ransomware payment to the evil 'insert one of four known protagonists'-affiliated cybercriminals for multimillion-dollar amounts is bad optics at the political level when infrastructure is crumbling, inflation is climbing and social services such as policing and justice, healthcare, and other government services are under immense strain and financial pressure," says Ian Thornton-Trump, CISO of Cyjax. Previously, he says, many victims could pay for cleanup - and sometimes the ransom payment - using their cyber insurance or by making a business-disruption claim.


Outdated infrastructure not up to today’s ransomware challenges

Challenges pertaining to outdated infrastructure could easily be compounded by the fact that many IT and security teams don’t seem to have a plan in place to mobilize if and when a cyber attack occurs. Nearly 60% of respondents expressed some level of concern about whether their IT and security teams would be able to mobilize efficiently to respond to an attack. These are just some of the findings from an April 2022 survey, conducted by Censuswide, of more than 2,000 IT and SecOps professionals (split nearly 50/50 between the two groups) in the United States, the United Kingdom, Australia and New Zealand. All respondents play a role in the decision-making process for IT or security within their organizations. “IT and security teams should raise the alarm bell if their organization continues to use antiquated technology to manage and secure their most critical digital asset – their data,” said Brian Spanswick, CISO, Cohesity. “Cyber criminals are actively preying on this outdated infrastructure as they know it was not built for today’s dispersed, multicloud environments, nor was it built to help companies protect and rapidly recover from sophisticated cyberattacks.”



Quote for the day:

"Speaking about it and doing it are not the same thing." -- Gordon Tredgold

Daily Tech Digest - August 29, 2022

6 key board questions CIOs must be prepared to answer

The board wants assurances that the CIO has command of tech investments tied to corporate strategy. “Demystify that connection,” Ferro says. “Show how those investments tie to the bigger picture and show immediate return as much as you can.” Global CIO and CDO Anupam Khare tries to educate the board of manufacturer Oshkosh Corp. in his presentations. “My slide deck is largely in the context of the business so you can see the benefit first and the technology later. That creates curiosity about how this technology creates value,” Khare says. “When we say, ‘This project or technology has created this operating income impact on the business,’ that’s the hook. Then I explain the driver for that impact, and that leads to a better understanding of how the technology works.” Board members may also come in with technology suggestions of their own that they hear about from competitors or from other boards they’re on. ... Avoid the urge to break out technical jargon to explain the merits of new cloud platforms, customer-facing apps, or Slack as a communication tool, and “answer that question from a business context, not from a technology context,” Holley says.


From applied AI to edge computing: 14 tech trends to watch

Mobility has arrived at a “great inflection” point — a shift towards autonomous, connected, electric and smart (ACES) technologies. This shift aims to disrupt markets while improving the efficiency and sustainability of land and air transportation of people and goods. ACES technologies for road mobility saw significant adoption during the past decade, and the pace could accelerate because of sustainability pressures, McKinsey said. Advanced air-mobility technologies, on the other hand, are either in pilot phase — for example, airborne-drone delivery — or remain in the early stages of development — for example, air taxis — and face some concerns about safety and other issues. Overall, mobility technologies attracted $236bn in investment last year. ... Sustainable consumption focuses on the use of goods and services that are produced with minimal environmental impact by using low-carbon technologies and sustainable materials. At a macro level, sustainable consumption is critical to mitigating environmental risks, including climate change.


Why Memory Enclaves Are The Foundation Of Confidential Computing

Data encryption has been around for a long time. It was first made available for data at rest on storage devices like disk and flash drives, and then for data in transit as it passed through the NIC and out across the network. But data in use – literally data in the memory of a system within which it is being processed – has not, until fairly recently, been protected by encryption. With the addition of memory encryption and enclaves, it is now possible to actually deliver a Confidential Computing platform with a TEE that provides data confidentiality. This stops unauthorized entities, whether people or applications, from viewing data while it is in use, in transit, or at rest. ... It effectively allows enterprises in regulated industries as well as government agencies and multi-tenant cloud service providers to better secure their environments. Importantly, Confidential Computing means that any organization running applications on the cloud can be sure that any other users of the cloud capacity and even the cloud service providers themselves cannot access the data or applications residing within a memory enclave.


Metasurfaces offer new possibilities for quantum research

Metasurfaces are ultrathin planar optical devices made up of arrays of nanoresonators. Their subwavelength thickness (a few hundred nanometers) renders them effectively two-dimensional. That makes them much easier to handle than traditional bulky optical devices. Even more importantly, because of this reduced thickness, the momentum conservation of the photons is relaxed: the photons travel through far less material than in traditional optical devices, and according to the uncertainty principle, confinement in space leads to undefined momentum. This allows multiple nonlinear and quantum processes to happen with comparable efficiencies and opens the door to many new materials that would not work in traditional optical elements. For this reason, and also because they are compact and more practical to handle than bulky optical elements, metasurfaces are coming into focus as sources of photon pairs for quantum experiments. In addition, metasurfaces could simultaneously transform photons in several degrees of freedom, such as polarization, frequency, and path.
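
The "confinement in space leads to undefined momentum" argument is just the standard Heisenberg uncertainty relation applied along the propagation direction: a subwavelength thickness means a small spatial extent, hence a broad spread in longitudinal momentum, which is why strict momentum conservation (phase matching) can be relaxed.

```latex
% Heisenberg uncertainty relation along the propagation direction z:
% a small \Delta z (subwavelength thickness) forces a large \Delta p_z.
\Delta z \,\Delta p_z \;\ge\; \frac{\hbar}{2}
```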


Agile: Starting at the top

Having strong support was key to this change in beliefs among the leadership team. Aisha Mir, IT Agile Operations Director for Thales North America, has a track record of successful agile transformations and was eager to help the leadership team overcome any initial hurdles. “The best thing I saw out of previous transformations I’ve been a part of was the way that the team started working together and the way they were empowered. I really wanted that for our team,” says Mir. “In those first few sprints, we saw that there were ways for all of us to help each other, and that’s when the rest of the team began believing. I had seen that happen before – where the team really becomes one unit and they see what tasks are in front of them – and they scrum together to finish it.” While the support was essential, one motivating factor helped them work through any challenge in their way: How could they ask other parts of the IT organization to adopt agile methodologies if they couldn’t do it themselves? “When we started, we all had some level of skepticism but were willing to try it because we knew this was going to be the life our organization was going to live,” says Daniel Baldwin.


AutoML: The Promise vs. Reality According to Practitioners

The data collection, data tagging, and data wrangling of pre-processing are still tedious, manual processes. There are utilities that provide some time savings and aid in simple feature engineering, but overall, most practitioners do not make use of AutoML as they prepare data. In post-processing, AutoML offerings have some deployment capabilities, but deployment is famously a problematic handoff between MLOps and DevOps that is still in need of automation. Take, for example, one of the most common post-processing tasks: generating reports and sharing results. While cloud-hosted AutoML tools are able to auto-generate reports and visualizations, our findings show that users still fall back on manual approaches to modify the default reports. The second most common post-processing task is deploying models. Automated deployment was available only to users of hosted AutoML tools, and even there limitations remained because of security or end-user experience considerations. The failure of AutoML to be end-to-end can actually cut into the efficiency improvements it promises.


Best Practices for Building Serverless Microservices

There are two schools of thought when it comes to structuring your repositories for an application: monorepo vs multiple repos. A monorepo is a single repository that has logical separations for distinct services. In other words, all microservices would live in the same repo but would be separated by different folders. Benefits of a monorepo include easier discoverability and governance. Drawbacks include the size of the repository as the application scales, a large blast radius if the master branch is broken, and ambiguity of ownership. On the flip side, having a repository per microservice has its ups and downs. Benefits of multiple repos include distinct domain boundaries, clear code ownership, and succinct, minimal repo sizes. Drawbacks include the overhead of creating and maintaining multiple repositories and applying consistent governance rules across all of them. In the case of serverless, I opt for a repository per microservice. It draws clear lines for what the microservice is responsible for and keeps the code lightweight and focused.


5 Super Fast Ways To Improve Core Web Vitals

High-quality images consume more space, and when the image size is big, your loading time increases. If loading time increases, the user experience suffers. So, keeping the image size as small as possible is best: compress your images. If you built your website with WordPress, you can use plugins like ShortPixel to compress images; if not, many online tools are available. You might wonder whether compression affects the quality of the image. To some extent, yes, it does, but the loss is usually visible only when zooming in on the image. Moreover, use JPEG format for images and SVG format for logos and icons; it is even better if you can use WebP format. ... One of the important metrics of the Core Web Vitals is Cumulative Layout Shift. Imagine that you're scrolling through a website on your phone and it looks ready to engage with. You see a piece of text with a hyperlink that grabs your interest, and you're about to click it. When you click, all of a sudden the text disappears and an image takes its place.
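
As a rough sketch of the compression step, the snippet below re-encodes an image with Pillow and also writes a WebP copy. The file names and quality settings are placeholders for illustration, not values recommended by the article.

```python
# Minimal sketch of shrinking an image for the web with Pillow.
from PIL import Image

def compress_for_web(src: str) -> None:
    img = Image.open(src).convert("RGB")   # JPEG cannot store an alpha channel
    img.save("hero.jpg", "JPEG", quality=80, optimize=True)   # smaller JPEG
    img.save("hero.webp", "WEBP", quality=80)                 # usually smaller still

compress_for_web("hero_original.png")   # hypothetical source file
```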


Cyber-Insurance Firms Limit Payouts, Risk Obsolescence

While the insurers' position is understandable, businesses — which have already seen their premiums skyrocket over the past three years — should question whether insurance still mitigates risk effectively, says Pankaj Goyal, senior vice president of data science and cyber insurance at Safe Security, a cyber-risk analysis firm. "Insurance works on trust, [so answer the question,] 'will an insurance policy keep me whole when a bad event happens?' " he says. "Today, the answer might be 'I don't know.' When customers lose trust, everyone loses, including the insurance companies." ... Indeed, the exclusion will likely result in fewer companies relying on cyber insurance as a way to mitigate catastrophic risk. Instead, companies need to make sure that their cybersecurity controls and measures can mitigate the cost of any catastrophic attack, says David Lindner, chief information security officer at Contrast Security, an application security firm. Creating data redundancies, such as backups, expanding visibility of network events, using a trusted forensics firm, and training all employees in cybersecurity can all help harden a business against cyberattacks and reduce damages.


Data security hinges on clear policies and automated enforcement

The key is to establish policy guardrails for internal use to minimize cyber risk and maximize the value of the data. Once policies are established, the next consideration is establishing continuous oversight. This component is difficult if the aim is to build human oversight teams, because combining people, processes, and technology is cumbersome, expensive, and not 100% reliable. Training people to manually combat all these issues is not only hard but requires a significant investment over time. As a result, organizations are looking to technology to provide long-term, scalable, and automated policies to govern data access and adhere to compliance and regulatory requirements. They are also leveraging these modern software approaches to ensure privacy without forcing analysts or data scientists to “take a number” and wait for IT when they need access to data for a specific project or even everyday business use. With a focus on establishing policies and deciding who gets to see/access what data and how it is used, organizations gain visibility into and control over appropriate data access without the risk of overexposure. 
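
As an illustration of what an automated policy guardrail can look like in practice, here is a minimal sketch that evaluates every data-access request against a small set of rules before granting access. The roles, column names, and rules are hypothetical examples, not a prescription from the article.

```python
# Illustrative policy guardrail evaluated automatically on each data-access request.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    role: str           # e.g. "analyst", "data_scientist"
    dataset: str
    columns: set[str]
    purpose: str        # e.g. "marketing_analytics"

SENSITIVE_COLUMNS = {"ssn", "email", "date_of_birth"}

POLICIES = [
    # Each rule returns True if the request is allowed under that rule.
    lambda r: r.purpose != "",                                             # purpose must be declared
    lambda r: r.role != "analyst" or not (r.columns & SENSITIVE_COLUMNS),  # analysts never see raw PII
]

def is_allowed(request: AccessRequest) -> bool:
    """Grant access only if every policy guardrail passes."""
    return all(policy(request) for policy in POLICIES)

print(is_allowed(AccessRequest("analyst", "customers", {"email", "region"}, "marketing_analytics")))  # False
print(is_allowed(AccessRequest("data_scientist", "customers", {"email"}, "churn_model")))             # True
```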



Quote for the day:

"Leadership is a journey, not a destination. It is a marathon, not a sprint. It is a process, not an outcome." -- John Donahoe