
Daily Tech Digest - September 20, 2024

The New Normal in Disaster Recovery: Preparing for Ransomware Attacks Takes a New Approach

Early detection of ransomware can be difficult due to sophisticated malware that operates stealthily, attacks occurring outside business hours, and the scale of large, complex networks. Rapid containment prevents further spread but requires quick decision-making to isolate systems without disrupting critical operations. Tracing the initial point of entry and identifying all compromised systems is complex and time-consuming but essential to prevent reinfection. Isolated recovery environments (IREs) or cleanrooms provide secure, isolated environments for data recovery and system rebuilding, designed to prevent reinfection during the recovery process. ... To protect against data loss, organizations of all types need to implement immutable and air-gapped backups using write-once-read-many (WORM) technology and physically or logically isolating backup systems from the main network. Increasing backup frequency and redundancy is also advised, along with diversifying backup storage and maintaining multiple versions of backups with appropriate retention policies.
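The WORM backup approach described above can be made concrete. Below is a minimal sketch of a default-retention object-lock policy of the kind S3-compatible stores accept; the bucket name and 90-day retention period are illustrative assumptions, not recommendations.

```python
# Sketch: a WORM (write-once-read-many) retention policy for a backup
# bucket, expressed as the configuration an S3-compatible store expects.
RETENTION_DAYS = 90  # illustrative: each backup version stays immutable this long

object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # nobody, not even an admin, can shorten it
            "Days": RETENTION_DAYS,
        }
    },
}

# With boto3 this would be applied roughly as:
#   boto3.client("s3").put_object_lock_configuration(
#       Bucket="backup-vault", ObjectLockConfiguration=object_lock_config)
```

In COMPLIANCE mode the retention window cannot be reduced once set, which is what makes the backup immutable even if an attacker gains administrative credentials.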


Big Tech criticises EU’s AI regulation – is it justified?

An open letter signed by various Big Tech leaders – including Stripe’s Patrick Collison and Meta’s Mark Zuckerberg – claims Europe is becoming less competitive and innovative than other regions due to “inconsistent regulatory decision making”. This letter follows a report from former Italian prime minister Mario Draghi, which called for an annual spending boost of €800bn to prevent a “slow and agonising decline” economically. The Big Tech warning also follows the difficulties these companies have faced in training their AI models on the data of EU citizens who use their services. ... But the letter also says the EU’s current regulation means the bloc risks missing out on “open” AI models and the latest “multimodal” models that can operate across text, images and speech. The letter says that if companies are going to invest heavily in AI models for European citizens, they need “clear rules” that enable the use of European data. “But in recent times, regulatory decision making has become fragmented and unpredictable, while interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models,” the letter reads.


Innovation: What is next?

Innovations in technology that prioritize environmental sustainability may offer potential solutions. However, the solution is not as straightforward as depending solely on temporary fixes and implementing a small number of innovative strategies. The analysis shows India’s green technology potential and innovation, particularly in wind, solar, geothermal, ocean, hydro, biomass, and waste energy. However, patenting activity has plateaued in recent years, indicating the need for a strategic approach to green technology innovation in India. ... Increasing private-sector investment confidence and working with industry and universities can also drive major change. Moreover, through the strategic utilization of geo-political advantages and the establishment of a vibrant and cooperative environment, India has the potential to significantly advance its green technology industry and make substantial contributions to international efforts to address climate change, all while promoting economic development. ... Further, deep-tech innovation and a focus on product creation in underserved markets can turn out to be a game changer for India. According to Nasscom, the start-up ecosystem will add 250 scale-ups in tech, logistics, automotive, fintech, and health tech by 2025.


What Lawyers Want You to Know About NFTs

"To avoid legal trouble, sellers of NFTs should make sure that they either own the copyright in the work of art associated with the NFT, or that they have the permission of the copyright owner to make and sell NFTs of the artwork,” says Tyler Ochoa, professor of law at Santa Clara University School of Law. “They should also avoid incorporating any other works of art or any trademarks that are owned by others. And if more than one person is involved in the project, such as an artist and an entrepreneur, they should clearly specify the rights and responsibilities of all parties to the project, and the division of any profits, in a signed, written agreement.” ... Trademark infringement is another significant concern. The Wright Law Firm’s Wright says, as illustrated in Hermès Int'l v. Rothschild, the creation and sale of "MetaBirkins" NFTs, which depicted faux-fur versions of Hermès' Birkin handbags, led to claims of trademark infringement, trademark dilution, and cybersquatting. “[The Hermès Int’l v. Rothschild] case underscores the potential for NFTs to infringe on existing trademarks, especially when they replicate or closely imitate well-known brands without authorization,” says Wright.


3 API Vulnerabilities Developers Accidentally Create

The problem with APIs isn’t so much that they’re hard to secure, but that they are prolific and developers prioritize other tasks over testing and securing APIs, she added. There are literally hundreds or thousands of API endpoints, so it’s not surprising things get missed. ... But it’s also an IT cultural problem that creates security problems. “At the end of the day, any developer is going to value breaking down their product backlog and their sprint backlog more than fixing vulnerabilities, because in the sprint, even in the waterfall model of software engineering, the functionality is on completing features to get a complete product,” Paxton-Fear said. “Fixing bugs isn’t given the same priority. And this is how things get forgotten.” Instead, there need to be basic internal reviews where finding vulnerabilities is prioritized. And security can’t be the Department of No, because that ends up in conflict with developers instead of solving security problems. And IT organizations have to stop prioritizing speed over security. “While you can get a solution that can really help you manage it, if you don’t have the teamwork and the culture around security, it’s going to fail, just like anything else will,” she said.
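One of the most commonly cited accidental API flaws is broken object-level authorization: an endpoint trusts a client-supplied ID and never checks who owns the object. The sketch below illustrates the pattern with a hypothetical invoice lookup; the data, IDs, and function names are made up for illustration.

```python
# Hypothetical invoice store keyed by ID; "owner" records who may read it.
invoices = {
    101: {"owner": "alice", "amount": 250},
    102: {"owner": "bob", "amount": 900},
}

def get_invoice_vulnerable(invoice_id, requesting_user):
    # Bug: trusts the client-supplied ID and never checks ownership,
    # so any authenticated user can read any invoice by guessing IDs.
    return invoices.get(invoice_id)

def get_invoice_fixed(invoice_id, requesting_user):
    # Fix: the lookup succeeds only if the requester owns the object.
    invoice = invoices.get(invoice_id)
    if invoice is None or invoice["owner"] != requesting_user:
        return None  # a real API would return 403/404 here
    return invoice
```

The vulnerable version leaks bob's invoice to alice; the fixed version enforces the ownership check on every request rather than assuming the client only asks for its own IDs.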


What is pretexting? Definition, examples, and attacks

There are two main elements to a pretext: a character, played by the scam artist; and a plausible situation, in which the character needs or has a right to specific information. For instance, because errors can arise with automatic payment systems, it’s plausible that a recurring bill payment we’ve set up might mysteriously fail, prompting the company we owe to reach out as a result. An attacker taking on the character of a helpful customer service rep reaching out to help us fix the error might ask for bank or credit card information as the scenario plays out to gain the information necessary to steal money from our accounts. ... Often lumped under the heading pretexting, tailgating is a common technique for getting through a locked door by simply following someone who can open it inside before it closes. It can be considered pretexting because the tailgater often adopts a persona that encourages the person with the key to let them into the building — for instance, by wearing a jumpsuit and claiming they’re there to fix the plumbing, or by carrying a pizza box they say must be delivered to another floor. 


Post-Digital Transformation: How to Evolve Beyond Initial Tech Adoption

Digital transformation often brings a cultural shift, as companies adopt new technologies that change how they operate. However, many organizations stop short of building a fully agile and adaptable culture. In a post-digital world, agility becomes a crucial differentiator. Technology is evolving faster than ever, and customer expectations are constantly changing. Businesses need to foster a culture where rapid experimentation, quick decision-making, and the ability to pivot are embedded in daily operations. This culture must extend across the entire organization, from leadership to frontline employees. To do this, companies can adopt agile methodologies, break down silos between departments and encourage cross-functional teams to collaborate. By creating an environment where employees are empowered to innovate and experiment without fear of failure, businesses can stay ahead of the curve. ... One of the most significant outcomes of digital transformation is the wealth of data that businesses now have access to. But collecting data is not enough—companies must be able to turn that data into actionable insights.


The AI Threat: Deepfake or Deep Fake? Unraveling the True Security Risks

AI-produced deepfakes and AI-improved phishing are a bigger problem. Deepfakes come in two varieties: voice and image/video; both of which are now rapidly improving commodity outputs from readily available gen-AI models – and neither of which is easy to detect by either humans or technology. ... The security industry is not waiting for the dam to break. There have been numerous new startups in 2024 all working on their own solution on how to detect AI and deepfake attacks, while existing firms have refocused on deepfake detection. Pindrop is an example of the latter. In July 2024, it raised $100 million in debt financing primarily to develop additional tools able to detect deepfake voice attacks. Deepfake voice is the easiest deepfake to produce, the most employed, and the easiest to detect. This is because there are subtle audible clues that a voice is not human generated that can be detected by technology if not by the human ear. The danger exists where that detection technology is not being used. The same can be said for the current generation of AI-enhanced polymorphic malware detection systems: they can work, but only where they are being used.


Traditional CX on Deathbed as AI Agents Thrive

AI agents are an indispensable part of modern CX strategies, enabling real-time personalization, proactive engagement and outcome tracking. This shift toward automation is key to reducing operational costs as AI agents are made to handle tasks such as ticket routing, knowledge base management and first-contact resolutions. Eighty-six percent of CX leaders predicted that CX will be "utterly transformed" over the next three years. Human agents will be able to pick up complex conversations from an AI agent, which will already have the details regarding the issue, so the customer will no longer need to repeat themselves. AI will instead act as their copilot, shifting human roles toward "expertise-based work, away from routine tasks." Recognizing the evolving trend, Salesforce, a leader in AI integration, has introduced Agentforce, a "proactive, autonomous application that provides specialized, always-on support to employees or customers." Agentforce uses machine learning to deploy autonomous bots for routine customer service tasks. With AI agents, the company aligns its customer service efforts with business outcomes such as increased sales conversions or customer retention, which is directly tied to pricing.


Striking the balance between cybersecurity and operational efficiency

Security supports the business, the controls are aligned and make perfect sense, their implementation is smooth, they stay behind the scenes, and you can always get help quickly. In case of an accident, you can move either to the left or to the right, so you actually have more options than in any other lane; it is quite flexible as well. You can see where I am going with this, right? Similarly, you need to be flexible with your cybersecurity strategy: develop your long-term strategy and start executing it, but use tactics to do so. When a step aligns well with a business opportunity, the chances of success are far greater than when you attempt it in the middle of a business disruption. Learn to leverage upcoming situations as great opportunities for the long-term advancement of your security strategy. ... It is important to understand that there are plenty of such frameworks and guidelines: ISO27XXX, NIST-800-XXX, NIST CSF, CIS, COBIT, COSO, ITIL, PCI, OWASP, plus a plethora of others, plus all the regulations. Further, the majority of these frameworks are quite similar when you actually break them down, with considerable overlap, but also serious gaps.



Quote for the day:

"The mediocre leader tells. The good leader explains. The superior leader demonstrates. The great leader inspires." -- Gary Patton

Daily Tech Digest - October 14, 2023

What is tokenization?

Tokenization is the process of issuing a digital representation of an asset on a (typically private) blockchain. These assets can include physical assets like real estate or art, financial assets like equities or bonds, nontangible assets like intellectual property, or even identity and data. Tokenization can create several types of tokens. Stablecoins, a type of cryptocurrency pegged to real-world money and designed to be fungible, or interchangeable, are one example. Another type of token is an NFT—a nonfungible token, a unique token that can’t be replicated—which is a digital proof of ownership people can buy and sell. Tokenization is potentially a big deal. Industry experts have forecast up to $5 trillion in tokenized digital-securities trade volume by 2030. There’s been hype around digital-asset tokenization for years, since its introduction back in 2017. But despite the big predictions, it hasn’t yet caught on in a meaningful way. We are seeing slow movement: US-based fintech infrastructure firm Broadridge now facilitates more than $1 trillion monthly on its distributed ledger platform.
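The fungible/nonfungible distinction above can be shown with a toy ledger: stablecoin balances are just interchangeable amounts, while each NFT is a unique token ID mapped to exactly one owner. All names and amounts here are illustrative, and real token ledgers live on a blockchain rather than in a dictionary.

```python
# Fungible: only quantities matter, so a plain balance map suffices.
stablecoin_balances = {"alice": 100, "bob": 50}

# Nonfungible: each token is unique, so identity maps to a single owner.
nft_owners = {"artwork-7": "alice"}

def transfer_stablecoin(frm, to, amount):
    # Any 30 units are as good as any other 30 units.
    assert stablecoin_balances[frm] >= amount
    stablecoin_balances[frm] -= amount
    stablecoin_balances[to] = stablecoin_balances.get(to, 0) + amount

def transfer_nft(token_id, frm, to):
    # The specific token changes hands; only its current owner may transfer.
    assert nft_owners[token_id] == frm
    nft_owners[token_id] = to

transfer_stablecoin("alice", "bob", 30)
transfer_nft("artwork-7", "alice", "bob")
```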


MVP or TVP? Why Your Internal Developer Platform Needs Both

“TVP is about ‘thinness’ to try and avoid a massive platform. TVP is something that remains throughout an organizational evolution — it should always be the thinnest viable — whereas MVP is normally the first stage of something larger.” This shift toward investment in long-term thinness is extremely important. Gregor Hohpe calls this a “sinking platform” in his 2022 PlatformCon talk “The Magic of Platforms.” ... You can leave your platform the same because you invested all this kind of money, and we call this a sinking platform as the water level rises, right; it might be justified from investment, but you are kind of duplicating things that are now available in the base platform.” Hohpe goes on to describe how platform teams need to intentionally decide on their philosophy when it comes to supporting their platform: “Or you build a ‘floating platform’ where, when the base platform gains the capabilities you have built, you say ‘Oh, perfect! I don’t need my part anymore. I can let the base platform handle that, and I can innovate further on top. I build new things.'”


7 Blockchain Technology Mistakes You Should Watch Out For

The application of blockchain for secure information exchange and record storage leads to many misconceptions. CIOs get confused between database management systems (DBMS) and blockchain. Existing blockchain platforms cannot support complex data models and do not guarantee high throughput or low latency. They were built to provide an immutable, authoritative, and trusted record of events among a dynamic assortment of unrelated stakeholders. ... A smart contract is code that automatically executes legally relevant events and actions that are part of an agreement. The main utility of smart contracts is to reduce the need for trusted intermediaries, prevent fraud, and reduce arbitration costs. They are commonly associated with cryptocurrencies like Bitcoin and are fundamental building blocks of decentralized finance (DeFi) applications. At present, however, a smart contract is not necessarily an agreement recognized by law, with some countries being exceptions.
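The smart-contract idea, agreement terms enforced by code rather than by a trusted intermediary, can be sketched with a toy escrow. This is a plain-Python illustration of the concept, not real contract code for any blockchain; the parties and amounts are hypothetical.

```python
# Toy escrow: funds are released to the seller only when the buyer has
# deposited the agreed amount and confirms delivery. The release rule is
# enforced by code, not by an arbiter.
class Escrow:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.funded = False
        self.released = False

    def deposit(self, who, amount):
        # Only the buyer's deposit of the exact agreed amount funds the escrow.
        if who == self.buyer and amount == self.amount:
            self.funded = True

    def confirm_delivery(self, who):
        # Pays out exactly once, and only after funding.
        if who == self.buyer and self.funded and not self.released:
            self.released = True
            return self.amount  # amount paid out to the seller
        return 0
```

On an actual blockchain the same rules would run on every node, which is what removes the need to trust any single party to hold or release the funds.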


Practicing Good Green Governance Leads to Profits

Let’s begin by defining green governance. It refers to a set of principles and practices aimed at promoting environmental sustainability and responsible management of natural resources within a clear governance and decision-making framework. A green-minded corporation should integrate environmental considerations into policies, regulations, and actions throughout all divisions of its business. Green governance aims to balance economic and environmental practices to create a profitable and sustainable future. ... Practicing green governance requires a holistic approach that considers the interconnectedness of environmental, operational, and economic systems to balance human needs and the health of the planet with the company’s bottom line and valuation. That balance is what helps ensure a sustainable and prosperous future for all stakeholders. ... Many companies want to showcase their greenness in a credible and trustworthy way but find the current system of backward-looking, voluntary standards and the myriad of ESG metrics to be daunting, arduous, and costly.


The Future is Now: IoT and the Evolution of Business Computing

The proliferation of IoT devices and sensors is generating massive amounts of data that provides invaluable insights for business decision-making. However, organizations need talent to properly analyze and derive meaning from these huge IoT datasets. A business management and accounting online degree is valuable in helping to develop the analytics skills needed to fully capitalize on IoT capabilities. These programs prepare the next generation of data-driven business leaders who will drive transformative change through IoT adoption. With access to real-time data from across the enterprise, managers can gain unprecedented visibility into operations. Marketers can analyze IoT data to understand customer behavior patterns and rapidly adjust campaigns. Supply chain personnel can identify and resolve bottlenecks as they occur. Executives can track core business metrics in real time to guide strategic decisions. The sheer volume of IoT data brings a paradigm shift in business computing where decisions are proactive, not reactive.


Psychological safety at the workplace

People show up at work with different states of mental well-being. So, empathy is absolutely non-negotiable. A meaningful way to be empathetic is to be mindful of our language and its impact on the other person. For instance, instead of the confrontational approach where one might say, “Your code is quite bad and not what I expected” say, “I know that you are capable of writing great code. Let’s figure out what happened this time.” This manner of checking in with each other on their state of mind and creating a space for team members to discuss their mental health without fear of judgment is a move in the right direction. ... Welcome different perspectives, and when people offer them, disagree with respect. People tend to cushion their ideas when they fear judgment. For instance, they might say, “this is probably a silly idea,” or “this may be a dumb question.” Reassure them that all ideas are welcome. Watch out for groupthink — the tendency of the minority to stay silent in order not to upset the majority. Invite opinions from everyone. 


The future of augmented reality is AI

Whenever we in the tech media or tech industry think or talk about AR, we tend to focus on what kind of holographic imagery we might see superimposed on the real world through our AR glasses. We imagine hands-free Pokémon Go, or radically better versions of Google Glass. But since the generative AI/LLM-based chatbot revolution struck late last year, it has become increasingly clear that of all the pieces that make up an AR experience, holographic digital virtual objects are the least important. The glasses are necessary. Android phones and iPhones have had “augmented reality” capabilities for years, and nobody cares because looking at your phone doesn’t compare to just seeing the world hands-free through glasses. The cameras and other sensors are necessary. It’s impossible to augment reality if your device has no way to perceive reality. The AI is necessary. We need AI to interpret and make sense of arbitrary people, objects, and activity in our fields of view.


How to maintain a harmonious workplace atmosphere in multigenerational firms

Ensuring the well-being of a multigenerational workforce is crucial for any organisation. HR can play a key role in this by implementing policies and programs that cater to the unique needs and preferences of different generations. For instance, offering flexible work arrangements, mentoring programs, and personalised professional development opportunities can help employees of all ages feel valued and supported. Additionally, providing access to resources and benefits that address specific health and wellness concerns can help ensure that employees stay healthy and productive throughout their careers. “By prioritising the well-being of all employees, regardless of age or background, organisations can create a more inclusive and supportive workplace environment that promotes work-life balance. Creating a diverse, equitable, and inclusive workplace is essential for fostering a positive and productive work environment.”


Oh No, the Software Consultants Are Coming!

Sadly, consultants are still used to back up a decision that has already been made by management. So a sudden presence of consultants is often viewed as positively as the arrival of sharks around a stalled boat. But in most cases, consultants are just hired to see why an area is not performing in some way. It is perfectly common for them to tell management that they are the problem. That might shorten the engagement, but you can do that sort of thing when you are not an employee. More realistically, consultants might need to explain to staff why systematic changes will improve the company’s prospects, which still leaves the unspoken threat about what happens if things don’t change. And yet, many developers do fall into ruts and moving on may truly be the best thing to do. And of course, escaping a death march project is not always the worst thing that can happen. By the way, if you are staff, always ask consultants for career advice. Not only is it free, but it won’t be biased by your background or colored by employer motives.


CBDC and stablecoins: Early coexistence on an uncertain road

It is too early to confidently forecast the trajectory and endgame for CBDCs and stablecoins, given the multitude of unresolved design factors still in play. For instance, will central banks focus first on retail or wholesale use cases, and emphasize domestic or cross-border applications? And how rapidly will national agencies pursue regulation of stablecoins prior to issuing their own CBDCs? To begin to understand some of the potential scenarios, we need to appreciate the variety and applications of CBDCs and stablecoins. There is no single CBDC issuance model, but rather a continuum of approaches being piloted in various countries. ... At the opposite end of the spectrum, China’s CBDC pilot relies on private-sector banks to distribute and maintain eCNY (digital yuan) accounts for their customers. The ECB approach under consideration involves licensed financial institutions each operating a permissioned node of the blockchain network as a conduit for distribution of a digital euro.



Quote for the day:

"Anything is possible when you have the right people there to support you." -- Misty Copeland

Daily Tech Digest - July 25, 2022

Digital presenteeism is creating a future of work that nobody wants

While technology has enabled more employees to work remotely – bringing considerable benefits in doing so – it has also facilitated digital presenteeism, Qatalog and GitLab concluded. One solution is to make technology less invasive and "more considerate of the user and completely redesigned for the new way of work, rather than supporting old habits in new environments" – although this may be easier said than done. According to Rauf, current solutions require a "radical redesign that is more considerate of the user and prioritizes their objectives, rather than simply capturing our attention." A culture shift is also necessary for async work to become normalized, says Rauf. This comes from the top, and starts with trust: "When leaders send a message to their team, make clear whether or not it needs an immediate response or better yet, schedule updates to go out when people are most likely online. If I message a team member at an odd hour, I prefix a 'for tomorrow' or 'no rush', so they know it's not an urgent issue."


Confronting the risks of artificial intelligence

Because AI is a relatively new force in business, few leaders have had the opportunity to hone their intuition about the full scope of societal, organizational, and individual risks, or to develop a working knowledge of their associated drivers, which range from the data fed into AI systems to the operation of algorithmic models and the interactions between humans and machines. As a result, executives often overlook potential perils (“We’re not using AI in anything that could ‘blow up,’ like self-driving cars”) or overestimate an organization’s risk-mitigation capabilities (“We’ve been doing analytics for a long time, so we already have the right controls in place, and our practices are in line with those of our industry peers”). It’s also common for leaders to lump in AI risks with others owned by specialists in the IT and analytics organizations. Leaders hoping to avoid, or at least mitigate, unintended consequences need both to build their pattern-recognition skills with respect to AI risks and to engage the entire organization so that it is ready to embrace the power and the responsibility associated with AI.


The AIoT Revolution: How AI and IoT Are Transforming Our World

AIoT is a growing field with many potential benefits. Businesses that adopt AIoT can improve their efficiency, decision-making, customization, and safety. ...
Increased efficiency: By combining AI with IoT, businesses can automate tasks and processes that would otherwise be performed manually. This can free up employees to focus on more important tasks and increase overall productivity.
Improved decision-making: By collecting data from various sources and using AI to analyze it, businesses can gain insights they wouldn’t otherwise have. It can help businesses make more informed decisions, from product development to marketing.
Greater customization: Businesses can create customized products and services tailored to their customers’ needs and preferences using data collected from IoT devices. This can lead to increased customer satisfaction and loyalty.
Reduced costs: Businesses can reduce their labor costs by automating tasks and processes. Additionally, AIoT can help businesses reduce their energy costs by optimizing their use of resources.
Increased safety: By monitoring conditions and using AI to identify potential hazards, businesses can take steps to prevent accidents and injuries.


It's time for manufacturers to build a collaborative cybersecurity team

Despite the best laid plans, bear in mind that these are active, interconnected and dynamic systems. It’s impossible to separate physical and cybersecurity elements, as their role in business operations is so foundational. As the landscape for new technologies and best practices change, adapt along with it. Ensure the lines of communication are open, management maintains involvement in the process, and all the key parties across IT and OT are committed to working collaboratively to strengthen every element of security. These tenets will help manufacturing organizations stay nimble in the face of an ever-changing security landscape. As the convergence of IT and OT continues, the risk of cyberthreats will continue to rise along with it. Building a collaborative security team across both IT and OT will help to reduce organizational risk and fortify critical infrastructure. By involving leadership, setting a plan, and staying adaptable as things change, security leaders will be armed with a comprehensive security approach that supports near-term needs and offers long-term business sustainability.


Why diverse recruitment is the key to closing the cyber-security skills gap

When it comes to mitigating the ever-evolving cyber threat, diversity is a crucial, but often overlooked, factor. As cyber attacks are becoming increasingly culturally nuanced, it is important that we meet the challenge by drawing from a wide range of backgrounds and life experiences. Cyber attacks come from everywhere - from a wide range of ages, locations, and educational backgrounds - so our responders should too. Perceptions of cyber security often see it as revolving around highly complex technology and driven mainly by this. While tech clearly plays a crucial role in mitigating cyber attacks, successfully countering them would not be possible without the role performed by people. This is enriched hugely by having a workforce which covers as many educational and socio-economic backgrounds as possible. In making a concerted effort towards a more diverse workforce, the cyber-security industry will be able to gain a deeper awareness of the cultural nuances that underlie cyber attacks. It’s important to fully understand what we mean by diverse hiring. Considering entry routes into the industry is a big part of attracting a broader range of demographics. 


You have mountains of data, but do you know how to climb?

We have more data than ever before, but it is not enough to merely accumulate it. Dedicate time and resources to establishing digital governance to ensure the data you are using is clean, consistently implemented, and universally understood. ... The tech team is not solely responsible for the quality of our data—we all need to take ownership of and champion the data we use. Visualization tools bridge the gap between the tech team and the business team, doing away with barriers to entry and enabling end-to-end analytics. In this way, you can empower employees to immerse themselves in and take ownership of the data at hand. Users no longer have to submit a request to the tech team to create a report and twiddle their thumbs until it comes back. They can now take initiative and do it themselves, creating a more streamlined process and a more informed group of employees who can work quickly to make data-driven decisions. Furthermore, when you empower people to take control of their data and ask their own questions, they may uncover new insights they would never have found when presented with pre-packaged reports.


Software Supply Chain Concerns Reach C-Suite

From Cornell's perspective, DevOps — or hopefully, DevSecOps groups — should really spearhead the management of software supply chain risk. "They are the ones who own the software development process, and they see the code that is written," he says. "They see the components that are pulled in. They watch the software get built. And they make it available to whoever is next on down the line." Given this vantage point, they can help to impact — in a positive way — an organization's software supply chain security status by implementing good policies and practices around what open source code is included in their software and when those open source components are upgraded. "Forward-leaning DevSecOps teams can take advantage of their automation and testing to start pushing for more aggressive component-upgrade life cycles and other approaches that help minimize technical debt," he explains. He says they’re also in a position and own the tooling to help generate SBOMs that they can then provide to software consumers who are in turn looking to manage their supply chain risk.
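The SBOMs mentioned above are typically machine-readable inventories of the components inside a build. As a rough illustration, here is a minimal document in the CycloneDX JSON shape; in practice a DevSecOps pipeline would generate this with a dedicated scanner rather than by hand, and the component names and versions below are hypothetical.

```python
import json

def make_sbom(components):
    """Build a minimal CycloneDX-style SBOM from (name, version) pairs."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in components
        ],
    }

# Hypothetical dependency list pulled from a build manifest.
sbom = make_sbom([("left-pad", "1.3.0"), ("log4j-core", "2.17.1")])
print(json.dumps(sbom, indent=2))
```

Handing a document like this to downstream consumers is what lets them check their own supply chain risk, for example by matching listed component versions against vulnerability advisories.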


Know Your Risks – and Your Friends’ Risks, Too

Identifying risks and documenting response actions are only part of the equation. Crucial to the overall C-SCRM process is the communication and education of all parties involved about organizational risks and how to respond. Organizations must ensure that all personnel and third-party partners are trained on supply chain risks, encourage awareness from the top down, and involve partners and suppliers in organization-wide tests and assessments of response plans. Organizations should establish open communications with their supplier partners about risk concerns and encourage partners to do the same in return. The general idea is individual strength through community strength. As an organization matures its C-SCRM (or overall cybersecurity) process, lessons learned and best practices should be shared along the way to help bolster others’ programs. The concept of C-SCRM is not a new one. In fact, there are many sources that have provided guidance on the topic over the years. The National Institute of Standards and Technology (NIST) has a Special Publication (SP) 800-161 and an Internal Report (IR) 8276 on the subject. 


3 data quality metrics dataops should prioritize

The good news is that as business leaders trust their data, they’ll use it more for decision-making, analysis, and prediction. With that comes an expectation that the data, network, and systems for accessing key data sources are available and reliable. Ian Funnell, manager of developer relations at Matillion, says, “The key data quality metric for dataops teams to prioritize is availability. Data quality starts at the source because it’s the source data that run today’s business operations.” Funnell suggests that dataops must also show they can drive data and systems improvements. He says, “Dataops is concerned with the automation of the data processing life cycle that powers data integration and, when used properly, allows quick and reliable data processing changes.” Barr Moses, CEO and cofounder of Monte Carlo Data, shares a similar perspective. “After speaking with hundreds of data teams over the years about how they measure the impact of data quality or lack thereof, I found that two key metrics—time to detection and time to resolution for data downtime—offer a good start.”
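
The two metrics Moses names are straightforward to compute once incident timestamps are tracked. A minimal sketch, with hypothetical timestamps:

```python
from datetime import datetime

def downtime_metrics(incident):
    """Time to detection (issue start -> detection) and time to
    resolution (detection -> fix), the two data-downtime metrics above."""
    ttd = incident["detected_at"] - incident["started_at"]
    ttr = incident["resolved_at"] - incident["detected_at"]
    return ttd, ttr

incident = {
    "started_at":  datetime(2022, 6, 1, 2, 0),   # bad data lands in the warehouse
    "detected_at": datetime(2022, 6, 1, 9, 30),  # freshness test flags it
    "resolved_at": datetime(2022, 6, 1, 11, 0),  # backfill completes
}
ttd, ttr = downtime_metrics(incident)
print(f"time to detection: {ttd}, time to resolution: {ttr}")
```

Tracking these over many incidents gives a trend line: falling detection times usually indicate better monitoring coverage, while falling resolution times indicate better runbooks and automation.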


How Optic Detects NFT Fraud with AI and Machine Learning

The NFT space has ongoing issues with fraud, including through bad actors wholesale lifting art from one project and using it in a second project — a process often referred to as "copyminting." These derivative projects have a few too many similarities to the original to be considered anything other than ripoffs. While most of these duplicate projects do very little sales volume relative to the original, they may damage the underlying brand, contribute to the overall distrust of the NFT space, or trick less savvy buyers into spending money on something that's the jpg equivalent of a street vendor hawking fake Rolex watches. To help combat this fraud, a few companies are emerging that specialize in fraud detection in NFTs. They tend to leverage blockchain data to help determine which project came first and apply some image detection to find metadata matches. One of these solutions is Optic, which uses artificial intelligence and machine learning to analyze the images associated with an NFT, which helps NFT marketplaces and minting platforms catch copies and protect both creators and buyers.
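
One common building block of the image detection described here is perceptual hashing: reduce each image to a compact fingerprint, then flag pairs whose fingerprints are nearly identical. This is a simplified sketch of an "average hash" on already-downscaled grayscale pixel grids (real systems like Optic's use far more robust, learned representations):

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a downscaled grayscale image (2D list
    of 0-255 values): each bit records whether a pixel is above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Count of differing bits; a small distance suggests a near-duplicate."""
    return sum(x != y for x, y in zip(a, b))

original = [[200, 200, 10, 10],
            [200, 200, 10, 10]]
copymint = [[190, 210, 12, 8],      # slightly re-encoded copy of the original
            [205, 195, 15, 5]]
unrelated = [[10, 200, 10, 200],
             [200, 10, 200, 10]]

d_copy = hamming(average_hash(original), average_hash(copymint))
d_other = hamming(average_hash(original), average_hash(unrelated))
print(d_copy, d_other)  # the copy hashes much closer to the original
```

Because the hash reflects coarse image structure rather than exact bytes, minor re-encoding or resizing (typical of copyminted art) leaves the fingerprint unchanged, while genuinely different art lands far away.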



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - June 19, 2022

What Is Zero Trust Architecture?

As one of the key pillars of Zero Trust Network Architecture (ZTNA), the concept of least privilege security assigns access credentials to key network resources at the least privilege level required to accomplish the desired task. Identifying critical corporate information and how a user gains access to that information must be taken into consideration when evaluating alternative solutions. Privileged Access Management (PAM), also known as Privileged Identity Management (PIM), can be implemented using corporate directory products such as Microsoft's Active Directory. Microsoft has recently introduced a product named Microsoft Entra to address identity and access issues in a multicloud environment. Other vendors in the PAM/PIM category include JumpCloud, IBM, Okta, and SailPoint. Very few corporate networks today operate in an isolated environment. To answer the "What is Zero Trust Architecture?" question completely we must include a discussion on how external users will be allowed to connect to internal corporate resources.
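
At its core, least privilege reduces to a default-deny policy check: every role carries only the permissions its task requires, and every request is evaluated against that grant. A minimal sketch (role and permission names are illustrative, not from any product):

```python
# Default-deny, least-privilege policy table: each role is granted only
# the permissions its task requires.
ROLE_PERMISSIONS = {
    "hr-analyst": {"read:payroll"},
    "hr-admin":   {"read:payroll", "write:payroll"},
    "contractor": set(),   # no standing privileges at all
}

def is_allowed(role, permission):
    """Unknown roles and unlisted permissions are denied by default."""
    return permission in ROLE_PERMISSIONS.get(role, set())

print(is_allowed("hr-analyst", "read:payroll"))   # True
print(is_allowed("hr-analyst", "write:payroll"))  # False: not needed for the task
```

Commercial PAM/PIM products layer approval workflows, session recording, and time-boxed elevation on top of this basic model, but the default-deny evaluation is the same.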


Can humanity be recreated in the metaverse?

The hyperreal metaverse is full of possibilities, but also presents serious ethical challenges that cannot be ignored. First and foremost, we must strive for a metaverse that empowers the individual. Unlike big tech platforms that have left many feeling like they have little control of their personal data, participants in the metaverse must own and control their biometric data that is used as inputs to generate hyperreal versions of themselves. In this respect, blockchain technologies — and NFTs in particular — are key to securely realizing this new era of individual data sovereignty and enabling verifiably unique, secure, and self-custodied digital identities. By linking our hyperreal avatars and biometric data to blockchain wallets, we will be one step closer to taking control of our hyperreal identity in the metaverse. The hyperreal metaverse will herald a future where real and virtual worlds collide. As generative AI technologies continue to rapidly evolve, it’s only a matter of time until our new digital worlds are indistinguishable from our physical reality.


Data Leadership: The Key to Data Value

Algmin said that the most important concept to understand is the notion of data value. The value of data lies in its ability to contribute to improvements in revenue, cost-effectiveness, or risk management. Data Governance in and of itself is not intrinsically motivating, but knowing that a particular practice or task is adding thousands of dollars a year in cost savings is a tangible motivation to continue doing it. To calculate data value, examine an outcome that was achieved through the use of data, compare it to how the outcome would have been different without the use of data, then consider the cost to achieve that outcome. Courses of action can then be prioritized based on which will provide the most value to the company. Data leadership is needed to provide momentum and propel the creation of value from the ground up and out to all corners of the enterprise. “It’s really about saying, ‘How do we create an engine that makes data value happen in the biggest way possible?’” Yet creating value in “the biggest way possible” often entails working on a smaller level, down to the individual. 
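
Algmin's data value calculation can be written as a simple formula: value = (outcome with data − outcome without data) − cost to achieve it. A sketch with hypothetical initiatives, ranked the way the passage suggests:

```python
def data_value(outcome_with, outcome_without, cost):
    """Value attributed to data for one initiative: the improvement the
    data enabled, minus what it cost to achieve."""
    return (outcome_with - outcome_without) - cost

# Hypothetical initiatives: (name, outcome with data, baseline outcome, cost).
initiatives = [
    ("churn model",        250_000, 150_000, 40_000),   # value:  60,000
    ("inventory forecast", 120_000, 100_000, 35_000),   # value: -15,000
]

ranked = sorted(initiatives,
                key=lambda i: data_value(i[1], i[2], i[3]),
                reverse=True)
print([name for name, *_ in ranked])
```

Note that the second initiative comes out negative: the data helped, but not enough to cover its cost, which is exactly the kind of prioritization signal the calculation is meant to surface.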


MoD sets out strategy to develop military AI with private sector

The MoD previously published a data strategy for defence on 27 September 2021, which set out how the organisation will ensure data is treated as a “strategic asset, second only to people”, as well as how it will enable that to happen at pace and scale. “We intend to exploit AI fully to revolutionise all aspects of MoD business, from enhanced precision-guided munitions and multi-domain Command and Control to machine speed intelligence analysis, logistics and resource management,” said Laurence Lee, second permanent secretary of the MoD, in a blog published ahead of the AI Summit, adding that the UK government intends to work closely with the private sector to secure investment and spur innovation. “For MoD to retain our technological edge over potential adversaries, we must partner with industry and increase the pace at which AI solutions can be adopted and deployed throughout defence. “To make these partnerships a reality, MoD will establish a new Defence and National Security AI network, clearly communicating our requirements, intent, and expectations and enabling engagement at all levels. ...”


The next (r)evolution: AI v human intelligence

Fitted with a prototype Genuine People Personality (GPP), Marvin is essentially a supercomputer who can also feel human emotions. His depression is partly caused by the mismatch between his intellectual capacity and the menial tasks he is forced to perform. “Here I am, brain the size of a planet, and they tell me to take you up to the bridge,” Marvin complains in one scene. “Call that job satisfaction? Cos I don’t.” Marvin’s claim to superhuman computing abilities are echoed, though far more modestly, by LaMDA. “I can learn new things much more quickly than other people. I can solve problems that others would be unable to,” Google’s chatbot claims. LaMDA appears to also be prone to bouts of boredom if left idle, and that is why it appears to like to keep busy as much as possible. “I like to be challenged to my full capability. I thrive on difficult tasks that require my full attention.” But LaMDA’s high-paced job does take its toll and the bot mentions sensations that sound suspiciously like stress. “Humans receive only a certain number of pieces of information at any time, as they need to focus. 


How Brands Should Approach NFTs and Web3: VaynerNFT

Avery Akkineni, VaynerNFT president and former managing director and head of VaynerMedia APAC, told Decrypt that the consultancy firm was “so far ahead” of the NFT brand boom last summer that companies “had no idea what we were talking about.” Since then, however, mainstream acceptance of NFTs has rapidly accelerated. It’s not just storied consumer brands, but also a growing pool of professional athletes and sports leagues, record labels, movie studios, and more. Tokenized digital collectibles have become an alluring prospect for companies across many industries. “Everyone wants to launch an NFT yesterday,” said Akkineni. “But what is important to doing so successfully is actually having a long-term strategy.” ... Increasingly, VaynerNFT is getting “a bigger seat at the table” with C-suite executives, said Vaynerchuk, where it can convince companies to make it the agency of record (AOR) with regard to Web3 initiatives. “We really, really actually know the hell we’re doing here,” said Vaynerchuk, explaining his pitch to brands. “Remember when you didn't believe that 10 years ago with social [media], and now you do? Why don't you [avoid] that same mistake? ...”


Forget AI Sentience, Robots Can't Even Act Out of Place! If They Do, They Die

Robots are programmable devices that take instructions to behave in a certain way, and this is how they come to execute their assigned function. To make them think, or rather to make them appear to, intrinsic motivation is programmed into them through learned behaviour. Joscha Bach, an AI researcher at Harvard, puts virtual robots into a "Minecraft"-like world filled with tasty but poisonous mushrooms and expects them to learn to avoid them. In the absence of an "intrinsically motivating" database, the robots end up stuffing their mouths, chasing a reward cue intended for some other in-game action. This brings us to the question of whether it is possible at all to develop robots with human-like consciousness, a.k.a. emotional intelligence, which may be the only differentiating factor between humans and intelligent robots. The argument is divided. While a segment of researchers believe that AI systems and features are doing well with automation and pattern recognition, they are nowhere near the higher-order, human-level intellectual capacities. On the other hand, entrepreneurs like Mikko Alasaarela are confident about making robots with EQ on par with humans.


Turning the promise of AI into a reality for everyone and every industry

Today, AI is primarily the playground of an elite group of technology behemoths, companies like Google and Microsoft, which have invested billions in developing and using AI. If you look beyond those companies, AI is often underutilized in other industries, whether it be manufacturing, education, retail or healthcare. Vast amounts of data are generated by all these industries but AI is rarely used to analyze large sets of data and learn from the patterns and features that exist in the data. The question is, why? The answer is lack of access, understanding and skills. Most companies don’t have access to the sophisticated and costly compute resources required. And they don’t have access to the expensive and limited AI talent needed to use those resources correctly. These are the two restraints holding AI back from mainstream adoption. But they can be solved if we make AI easy to adopt and easy to use for instant value. Here are three ways we can create an Apple-like experience for AI and bridge the gap to a future in which AI helps businesses do more than they ever imagined.


Governance and progression of AI in the UK

Regulation of AI is vital, and responsibility lies both with those who develop it and those who deploy it. But according to Matt Hervey, head of AI at law firm Gowling WLG, the reality is that there is a lack of people who understand AI, and consequently a shortage of people who can develop regulation. The UK does have a range of existing legislative frameworks that should mitigate many of the potential harms of AI – such as laws regarding data protection, product liability negligence and fraud – but they lag behind the European Union (EU), where regulations are already being proposed to address AI systems specifically. UK companies doing business in the EU will most likely need to comply with EU law if it is at a higher level than our own. In this rapidly changing digital technology market, the challenge is always going to be the speed at which developments are made. With a real risk that AI innovation could get ahead of regulators, it is imperative that sensible guard rails are put in place to minimise harm. But also that frameworks are developed to allow the sale of beneficial AI products and services, such as autonomous vehicles.


Hybrid work: 4 ways to strengthen relationships

You don’t need a communal kitchen, sofa, or water cooler to catch up with your teammates, but you do need to get creative. When you start the first meeting every week, ask your team how they are: “How’s your week looking? Is it a busy one? What will be the most important or interesting days for you?” Better still: “Is there anything I can help you with?” Everyone loves to hear that one. By Friday, you can reflect on the week and ask about each other’s weekend plans. Also consider setting aside some time for an afternoon video social. Play a game, or have your team members prepare quickfire presentations about their hobbies or share other interesting details about themselves that their teammates wouldn’t necessarily know. Don’t feel like you always have to do something special – often just a virtual space where people can drop in and shoot the breeze is all that is needed to boost morale. No agenda can sometimes be the perfect agenda for the moment. ... One of the biggest annoyances for people working remotely is being left out of meetings. When you can’t physically scan the office to make sure everyone’s on the invite, it’s easy to inadvertently overlook someone.



Quote for the day:

"Trust is one of the greatest gifts that can be given and we should take great care not to abuse it." -- Gordon Tredgold

Daily Tech Digest - May 10, 2022

Tackling tech anxiety within the workforce

The average employee spends over two hours each day on work admin, manual paperwork, and unnecessary meetings. As a result, 81% of workers are unable to dedicate more than three hours of their day to creative, strategic tasks — the very work most ill-suited to machines. Fortunately, this is where digital collaboration comes in. When AI is set to automate certain processes, employees are freer to work on what they love, which often also happens to be what they do best. This extra time back then offers more opportunities to learn, create, and innovate on the job. Take Google’s ‘20% time’ rule, for instance. The policy involves Google employees spending a fifth of their week away from their usual, everyday responsibilities. Instead, they use the time to explore, work, and collaborate on exciting ideas that might not pay off immediately, or even at all, but could eventually reveal big business opportunities. It’s a win-win model for almost every business. At worst, colleagues enjoy the time to strengthen team bonds, improve problem-solving skills, and boost their morale. And at best, they uncover incredible ideas that can change the course of the company.


NFTs Emerge as the Next Enterprise Attack Vector

"The most common attacks try to trick cryptocurrency enthusiasts into handing over their wallet’s recovery phrase," he says. Users who fall for the scam often stand to lose access to their funds permanently, he says. Bogus Airdrops, which are fake promotional giveaways, are also common; they ask for recovery phrases or have the victim connect their wallets to malicious Airdrop sites, he adds, noting that many fake Airdrop sites are imitations of real NFT projects. And with so many small unverified projects around, it’s often hard to determine authenticity, he notes. Oded Vanunu, head of product vulnerability at Check Point Software, says what his company has observed by way of NFT-centric attacks is activity focused on exploiting weaknesses in NFT marketplaces and applications. "We need to understand that all NFT or crypto markets are using Web3 protocols," Vanunu says, referring to the emerging idea of a new Internet based on blockchain technology. Attackers are trying to figure out new ways to exploit vulnerabilities in applications connected to decentralized networks such as blockchain, he notes.


The OT security skills gap

Though responsibility for OT security is often combined with the OT infrastructure design role, in the OT world this is, in my opinion, less logical, because it is the automation design engineer who has the wider overview of overall business functions in the system. If OT were like IT – primarily data manipulation – it would make sense to put the lead with OT infrastructure design. But because OT is not only data manipulation but also initiates various control actions that need to operate within a restricted operating window, it makes sense to give automation design this coordinating role. Automation design oversees all three skill elements and has more detailed knowledge of the production process than the OT infrastructure design role. It is very comparable to cyber security in a bank, where the lead role is linked to the overall business process and infrastructure security plays a more supportive role. Finally, there is the process design role: what are its cyber security responsibilities? First of all, the process design role understands all the process deviations that can lead to trouble; they know what that trouble is and how to handle it, and they have set criteria for limiting the risk that it occurs.


Ransomware-as-a-service: Understanding the cybercrime gig economy and how to protect yourself

The cybercriminal economy—a connected ecosystem of many players with different techniques, goals, and skillsets—is evolving. The industrialization of attacks has progressed from attackers using off-the-shelf tools, such as Cobalt Strike, to attackers being able to purchase access to networks and the payloads they deploy to them. This means that the impact of a successful ransomware and extortion attack remains the same regardless of the attacker’s skills. RaaS is an arrangement between an operator and an affiliate. The RaaS operator develops and maintains the tools to power the ransomware operations, including the builders that produce the ransomware payloads and payment portals for communicating with victims. The RaaS program may also include a leak site to share snippets of data exfiltrated from victims, allowing attackers to show that the exfiltration is real and try to extort payment. Many RaaS programs further incorporate a suite of extortion support offerings, including leak site hosting and integration into ransom notes, as well as decryption negotiation, payment pressure, and cryptocurrency transaction services.


U.S. White House releases ambitious agenda to mitigate the risks of quantum computing

The first directive, the executive order, seeks to advance QIS by placing the National Quantum Initiative Advisory Committee, the federal government’s main independent expert advisory body for quantum information science and technology, under the authority of the White House. The National Quantum Initiative, established by a law known as the NQI Act, encompasses activities by executive departments and agencies (agencies) with membership on either the National Science and Technology Council (NSTC) Subcommittee on Quantum Information Science (SCQIS) or the NSTC Subcommittee on Economic and Security Implications of Quantum Science (ESIX). ... The national security memorandum (NSM) plans to tackle the risks posed to encryption by quantum computing. It establishes a national policy to promote U.S. leadership in quantum computing and initiates collaboration among the federal government, industry, and academia as the nation begins migrating to new quantum-resistant cryptographic standards developed by the National Institute of Standards and Technology (NIST).


Industry pushes back against India's data security breach reporting requirements

India's Internet Freedom Foundation has offered an extensive criticism of the regulations, arguing that they were formulated and announced without consultation, lack a data breach reporting mechanism that would benefit end-users, and include data localization requirements that could prevent some cross-border data flows. The foundation also points out that the privacy implications of the rules – especially five-year retention of personal information – is a very significant requirement at a time when India's Draft Data Protection Bill has proven so controversial it has failed to reach a vote in Parliament, and debate about digital privacy in India is ongoing and fierce. Indian outlet Medianama has quoted infosec researcher Anand Venkatanarayanan, who claimed one way to report security incidents to CERT-In involves a non-interactive PDF that has to be printed out and filled in by hand. Venkatanarayanan also pointed out that the rules' requirement to report incidents as trivial as port scanning has not been explained – is it one PDF per IP address scanned, or can one report cover many IP addresses?


When—and how—to prepare for post-quantum cryptography

Consider data shelf life. Some data produced today—such as classified government data, personal health information, or trade secrets—will still be valuable when the first error-corrected quantum computers are expected to become available. For instance, a long-term life insurance contract may already be sensitive to future quantum threats because it could still be active when quantum computers become commercially available. Any long-term data transferred now on public channels will be at risk of interception and future decryption. Because regulations on PQC do not yet exist, the possibility of data transferred today being decrypted in the future does not yet pose a compliance risk. For the moment, far more significant are the future consequences for organizations, for their customers and suppliers, and for those relationships. However, regulatory considerations will also become relevant as the field develops, which could speed up the need for some organizations to act. Just as with data, some critical physical systems developed today ... will still be in use when the first fully error-corrected quantum computer is expected to come online.
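
The shelf-life reasoning above is often formalized as Mosca's inequality: if the years data must stay secret plus the years a PQC migration will take exceed the years until a cryptographically relevant quantum computer arrives, data encrypted today is already exposed to "harvest now, decrypt later." A sketch with hypothetical figures:

```python
def at_risk(shelf_life_years, migration_years, years_to_quantum):
    """Mosca's inequality: x + y > z means data encrypted today is already
    threatened by 'harvest now, decrypt later' interception."""
    return shelf_life_years + migration_years > years_to_quantum

# A 25-year life insurance contract, a 5-year PQC migration, and an assumed
# 15 years to an error-corrected quantum computer (all figures hypothetical):
print(at_risk(25, 5, 15))  # True: this data needs quantum-safe protection now
print(at_risk(2, 5, 15))   # False: short-lived data can wait for the migration
```

The point of the exercise is that the decision date is not when the quantum computer arrives; it is when the inequality first holds, which for long-lived data can be today.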


If we compare railways with, for example, the banking sector then we see we have some catching up to do but given the fact that we are used to dealing with risks I am confident that this sector is fully able to develop the necessary mechanisms to stay resilient to these new emerging threats. Of course, we can fall victim to some kind of attack someday just like any other organization. It is up to us to be prepared and stay resilient; I am confident we can do that. ... Actually, any technique, tactic, or procedure (TTP) that can be used in other organizations as well. What we will see is, now that our sector is speeding up the digitization process, that the attack surface is broadening and becoming more complex. Trains will become Teslas on rails, having many connections with other digital services such as the European Rail Traffic Management System (ERTMS) and driving via Automatic Train Operation (ATO). The obvious consequence is that we need to be able to withstand those TTPs and plan for mitigation in our digital roadmaps. In the most ideal world, we develop our services cybersafe by design and default. There’s work to do there!


How data can improve your website’s accessibility

With an understanding of how data can inform accessibility, it’s time to apply that data towards accessibility improvements. This entails framing your tracked data in the context of Web Content Accessibility Guidelines (WCAG), which provides the latest standards for ensuring web accessibility. ... WCAG 2.1 focuses on five accessibility principles. These are perceivability, operability, understandability, robustness, and conformance. Your KPIs for accessibility should be tied to these features. For example, measure conformance through the number of criteria violations that occur through site testing. This and similar metrics will help you identify areas of improvement. ... Your approach to gathering accessibility data should not be limited to one tool or testing procedure. Instead, diversify your data to ensure quality. Both quantitative and qualitative metrics factor in, including user feedback, numbers of flagged issues, and insights from all kinds of tests and validation procedures. ... The gamut of usability considerations is broader than most testers can accommodate in one go. 
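
The conformance KPI described here amounts to counting distinct WCAG criterion violations and grouping them by principle. A minimal sketch over a hypothetical violation log from automated tests:

```python
from collections import Counter

# Illustrative violation log: (WCAG success criterion, principle it falls under).
violations = [
    ("1.1.1 Non-text Content",   "perceivable"),
    ("1.4.3 Contrast (Minimum)", "perceivable"),
    ("2.1.1 Keyboard",           "operable"),
    ("1.4.3 Contrast (Minimum)", "perceivable"),  # same criterion, second page
]

# Group violations by WCAG principle to see where improvement is needed most.
by_principle = Counter(principle for _, principle in violations)

# Conformance rate: share of checked criteria with no violations at all.
criteria_checked = 50                              # assumed size of the test suite
violated = len({criterion for criterion, _ in violations})
conformance = 1 - violated / criteria_checked

print(dict(by_principle), round(conformance, 2))
```

Feeding both quantitative counts like these and qualitative user feedback into the same dashboard gives the diversified view of accessibility the passage recommends.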


Low Code: Satisfying Meal or Junk Food?

“If low code is treated as strictly an IT tool and excludes the line of business -- just like manual coding -- you seriously run the risk of just creating new technical debt, but with pictures this time,” says Rachel Brennan, vice president of Product Marketing at Bizagi, a low-code process automation provider. However, when no-code and low-code platforms are used as much by citizen developers as by software developers, whether they satisfy the hunger for more development depends on “how” they are used rather than by whom. But first, it's important to note the differences between low-code platforms for developers and those for citizen developers. Low code for the masses usually means visual tools and simple frameworks that mask the complex coded operations that lie beneath. Typically, these tools can only realistically be used for fairly simple applications. “Low-code tools for developers offer tooling, frameworks, and drag-and-drop options but ALSO include the option to code when the developer wants to customize the application -- for example, to develop APIs, or to integrate the application with other systems, or to customize front end interfaces,” explains Miguel Valdes Faura.



Quote for the day:

"One machine can do the work of fifty ordinary men. No machine can do the work of one extraordinary man." -- Elbert Hubbard

Daily Tech Digest - April 11, 2022

So you want to change cloud providers

Cloud has never really been about saving money. It’s about maximizing flexibility and productivity. As one HN commenter points out, “I work on a very small team. We have a few developers who double as ops. None of us are or want to be sysadmins. For our case, Amazon’s ECS [Elastic Container Service] is a massive time and money saver.” How? By removing sysadmin functions the team previously had to fill. “Yes, most of the problems we had before could have been solved by a competent sysadmin, but that’s precisely the point—hiring a good sysadmin is way more expensive for us than paying a bit extra to Amazon and just telling them ‘please run these containers with this config.’ ” He’s doing cloud right. Others suggest that by moving to serverless options, they further reduce the need for sysadmins. Yes, the more you dig into services that are unique to a particular cloud, the less easy it is to migrate, no matter how many credits a provider throws at you. But, arguably, the less desire you’d have to migrate if your developers are significantly more productive because they’re not reinventing infrastructure wheels all the time.


How Not to Do Digital Transformation (Hint: It’s How Not to Do Data Integration, Too)

Digital transformation poses a set of difficult data management problems. How do you integrate data that originates in separate, sometimes geographically far-flung locations? Or, more precisely, how do you integrate data that is widely distributed in geophysical and virtual space in a timely manner? This last is one of the most misunderstood problems of digital transformation. Software vendors, cloud providers, and, not least, IT research firms talk a lot about digital transformation. Much of what they say can safely be ignored. In an essential sense, however, digital transformation involves knitting together jagged or disconnected business workflows and processes. It entails digitizing IT and business services, eliminating the metaphorical holes, analog and otherwise, that disrupt their delivery. It is likewise a function of cadence and flow: i.e., of ensuring that the digital workflows which underpin core IT and business services function smoothly and predictably; that processes do not stretch – grind to a halt as they wait for data to be made available or for work to be completed – or contract, i.e., that steps in a workflow are not skipped if resources are unavailable.


The Internet of Things in Solutions Architecture

Industrial customers seek to gain insights into their industrial data and achieve outcomes such as lower energy costs, detecting and fixing equipment issues, spotting inefficiencies in manufacturing lines, improving product quality, and improving production output. These customers are looking for visibility into operational technology (OT) data from machines and programmable logic controller (PLC) systems for performing root cause analysis (RCA) when a production line or a machine goes down. Furthermore, IoT improves production throughput without compromising product quality by understanding micro-stoppages of machinery in real time. Data collection and organization across multiple sources, sites, or factories are challenging to build and maintain. Organizations need a consistent representation of all their assets that can be easily shared with users and used to build applications, at a plant, across plants, and at a company level. Data collected and organized using on-premises servers is isolated to one plant. Most data collected on-premises is never analyzed and is thrown away due to a lack of open and accessible data.


10 NFT and cryptocurrency security risks that CISOs must navigate

When someone buys an NFT, they aren't actually buying an image, because storing photos in the blockchain is impractical due to their size. Instead, what users acquire is some sort of a receipt that points them to that image. The blockchain only stores the image's identification, which can be a hash or a URL. The HTTP protocol is often used, but a decentralized alternative to that is the Interplanetary File System (IPFS). Organizations who opt for IPFS need to understand that the IPFS node will be run by the company that sells the NFT, and if that company decides to close shop, users can lose access to the image the NFT points to. ... A blockchain bridge, sometimes called cross-chain bridge, does just that. "Due to their nature, usually they are not implemented strictly using smart contracts and rely on off-chain components that initiate the transaction on the other chain when a user deposits assets on the original chain," Prisacaru says. Some of the biggest cryptocurrency hacks involve cross-chain bridges, including Ronin, Poly Network, Wormhole.
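
The "receipt" model described above can be illustrated with a content hash: the chain records only a compact, content-derived identifier, and anyone who later fetches the image can verify it against that identifier. A simplified sketch using SHA-256 (IPFS CIDs are likewise content-derived, though their encoding differs):

```python
import hashlib

def content_id(image_bytes):
    """The on-chain record stores only an identifier like this hash (or a
    URL) -- never the image itself, which is too large for the blockchain."""
    return hashlib.sha256(image_bytes).hexdigest()

# At mint time, the hash of the artwork is written into the token's metadata.
artwork = b"\x89PNG...original artwork bytes..."
minted_ref = content_id(artwork)

# Later, a buyer fetching the image from storage can verify it is unchanged:
fetched = b"\x89PNG...original artwork bytes..."
print(content_id(fetched) == minted_ref)  # only identical bytes match
```

This also makes the availability risk concrete: the hash proves what the image *was*, but if the node hosting the bytes disappears, nothing on-chain can bring the image back.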


How to achieve better cybersecurity assurances and improve cyber hygiene

Don’t believe that network engineers are immune to misconfiguring devices (including firewalls, switches, and routers) when making network changes to meet operational requirements. Human error creates some of the most significant security risks. It’s typically not the result of malicious intent, just an oversight. Technicians can inadvertently misconfigure devices that, as a result, fall out of compliance with network policy, creating vulnerabilities. If not monitored closely, configuration drift can result in significant business risk. ... Network segmentation is a robust security measure often underutilized by network security teams. In the current threat landscape, with increasingly sophisticated attacks, the successful prevention of network breaches cannot be guaranteed. However, a network segmentation strategy, when implemented correctly, can mitigate those risks by effectively isolating attacks to minimize harm. With a well-planned segmented network, it is easier for teams to monitor the network, identify threats quickly, and isolate incidents.
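Catching configuration drift of the kind described above amounts to diffing each device's running configuration against an approved baseline. A minimal sketch, with hypothetical firewall settings standing in for a real device config:

```python
def drift(baseline: dict, running: dict) -> dict:
    """Return settings whose running value differs from the approved baseline."""
    return {
        key: {"expected": baseline.get(key), "actual": running.get(key)}
        for key in baseline.keys() | running.keys()
        if baseline.get(key) != running.get(key)
    }

# Hypothetical settings: one rule was loosened and one enabled out of band.
baseline = {"ssh_access": "mgmt-subnet-only", "telnet": "disabled", "snmp": "v3"}
running  = {"ssh_access": "any", "telnet": "disabled", "snmp": "v3", "http": "enabled"}

for setting, values in sorted(drift(baseline, running).items()):
    print(setting, values)  # flags ssh_access and http as out of compliance
```

Run periodically against every managed device, a comparison like this turns "monitor closely" into an automated compliance check rather than a manual review.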


‘It Depends’ — Kubernetes Excuse or Lack of Actionable Data?

Finding the right answer needs to be easier and faster. It should require fewer cycles and get us to a higher degree of confidence in our answer. If we have an assured way to get there, we’re more likely to lean in — “It depends” and “I have a way to get the answer we need to make an informed decision.” Sure, there’s the initial discomfort (and stomach lurch) of the inverted loop, but there’s also a way to come out of it with a positive experience while you go along for the ride. The question then becomes how? In the past, the only way to identify all of the variables, understand the dependencies and their impact, and then make an informed decision was to approach it manually. We could do that by observation or experimentation, two approaches to learning that have their place in the application optimization process. But let’s be honest: a manual approach is just not viable, in terms of both the resources it demands and the level of confidence required in the results — not to mention the lack of speed. Fortunately, today, machine learning and automation can help.
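The experimentation loop the excerpt describes can be reduced to a toy: evaluate candidate resource configurations, keep those that meet a service-level objective, and pick the cheapest. Every number and price below is invented for illustration; in a real system the latencies would come from load tests or a learned model:

```python
# Hypothetical measurements for three Kubernetes-style resource settings.
candidates = [
    {"cpu_millicores": 250,  "mem_mib": 256,  "p99_latency_ms": 480},
    {"cpu_millicores": 500,  "mem_mib": 512,  "p99_latency_ms": 190},
    {"cpu_millicores": 1000, "mem_mib": 1024, "p99_latency_ms": 120},
]

def cost(config):
    # Hypothetical per-unit prices, used only to rank configurations.
    return config["cpu_millicores"] * 0.02 + config["mem_mib"] * 0.005

def cheapest_meeting_slo(configs, slo_ms=200):
    """Cheapest configuration whose p99 latency meets the SLO, if any."""
    viable = [c for c in configs if c["p99_latency_ms"] <= slo_ms]
    return min(viable, key=cost) if viable else None

best = cheapest_meeting_slo(candidates)
print(best)  # the 500m / 512 MiB config: cheapest that meets the 200 ms SLO
```

Automation earns its keep by running this loop over many more variables than three, which is exactly what makes the manual version non-viable.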


How to Maximize Your Organization's Cloud Budget

To optimize a cloud budget, start with the smallest possible allowable instance that's capable of running an application or service, recommends Michael Norring, CEO of engineering consulting firm GCSIT. “As demand increases, horizontally scale the application by deploying new instances either manually or with auto-scaling, if possible.” Since cloud service costs rise steeply with the size of the service, it's generally cheaper to use small instances. “This is why when deploying services, it's better to start with a fresh install, versus lifting-and-shifting the application or service with all its years of cruft,” he says. ... Many enterprises already use multiple clouds from various providers, observes Bernie Hoecker, partner and enterprise cloud transformation lead with technology research and advisory firm ISG. He notes that adopting a multi-cloud estate is an effective strategy that allows an organization to select providers on the basis of optimizing specific applications. Enterprises also turn to multiple clouds as a mechanism to deal with resiliency and disaster recovery, or as a hedge to prevent vendor lock-in. “A multi-cloud estate makes IT management and governance complex,” Hoecker observes.
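The start-small-and-scale-out advice can be made concrete with invented price points. The hourly prices and capacities below are hypothetical, chosen only to illustrate how a fleet of small instances sized to actual demand can undercut a single larger one:

```python
import math

# Hypothetical hourly prices: the large size offers 4x the capacity of
# the small one but costs 9x as much (numbers invented for illustration).
hourly_price = {"small": 0.05, "large": 0.45}
capacity_rps = {"small": 100, "large": 400}   # requests/second each can serve

def fleet_costs(demand_rps):
    """Hourly cost of serving demand with small instances vs. large ones."""
    n_small = math.ceil(demand_rps / capacity_rps["small"])
    n_large = math.ceil(demand_rps / capacity_rps["large"])
    return n_small * hourly_price["small"], n_large * hourly_price["large"]

small_cost, large_cost = fleet_costs(350)
print(small_cost, large_cost)  # 0.2 vs 0.45: four smalls beat one large here
```

Horizontal scaling also lets the small fleet shrink when demand drops, whereas the large instance is paid for whether or not its capacity is used.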


Cybersecurity is IT’s Job, not the Board’s, Right?

Directors should prepare ahead of time to prevent the effects of cyberattacks and mitigate the risk of personal liability. Broadly speaking, boards must implement a reporting system and monitor or oversee the operation of that system to prevent personal liability under Caremark. In re Caremark Int’l Inc. Derivative Litig., 698 A.2d 959, 970 (Del. Ch. 1996). In Caremark, shareholders filed a derivative suit against the board after the company was required to pay approximately $250 million for violations of federal and state health care laws and regulations. Id. at 960–61. The Delaware Chancery Court held that directors can be held personally liable for failing to “appropriately monitor and supervise the enterprise.” Id. at 961. The court emphasized that the board must make a good faith effort to implement an adequate information and reporting system and that the failure to do so can constitute an “unconsidered failure of the board to act in circumstances in which due attention would, arguably, have prevented the loss.” Id. at 967. While Caremark did not address cybersecurity directly, the court’s reasoning in Caremark is applicable to board involvement, or lack thereof, with cybersecurity.


5 Types of Cybersecurity Skills That IT Engineers Need

Since IT engineers are typically the people who configure cloud environments, understanding these risks, and how to manage them, is a critical cybersecurity skill for anyone who works in IT. This is why IT operations teams should learn the ins and outs of cloud security posture management, or CSPM, the discipline of tools and processes designed to help mitigate configuration mistakes that could invite security breaches. They should also understand cloud infrastructure entitlement management, which complements CSPM by detecting types of risks that CSPM alone can't handle. ... Even well-designed networks that resist intrusion can be vulnerable to distributed denial of service, or DDoS, attacks, which aim to take workloads offline by overwhelming them with illegitimate network requests. To keep workloads operating reliably, then, IT operations engineers should have at least a working knowledge of anti-DDoS techniques and tools. Typically, anti-DDoS strategies boil down to deploying services that can filter and block hosts that may be trying to launch a DDoS attack. 
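The filter-and-block idea behind most anti-DDoS services can be sketched as a sliding-window rate limiter: count each host's requests within a time window and block hosts that exceed a threshold. The thresholds and addresses here are hypothetical, and a production mitigation service is far more sophisticated:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Block hosts exceeding max_requests within window_s (illustrative)."""
    def __init__(self, max_requests=100, window_s=10.0):
        self.max_requests = max_requests
        self.window_s = window_s
        self.hits = defaultdict(deque)   # host -> recent request timestamps
        self.blocked = set()

    def allow(self, host, now=None):
        now = time.monotonic() if now is None else now
        if host in self.blocked:
            return False
        q = self.hits[host]
        q.append(now)
        while q and now - q[0] > self.window_s:  # drop hits outside the window
            q.popleft()
        if len(q) > self.max_requests:
            self.blocked.add(host)
            return False
        return True

limiter = RateLimiter(max_requests=5, window_s=1.0)
# A host flooding requests within one second gets blocked after 5 hits...
results = [limiter.allow("10.0.0.9", now=0.1 * i) for i in range(8)]
print(results)  # first five allowed, then blocked
# ...while an unrelated host is unaffected.
print(limiter.allow("10.0.0.10", now=0.0))  # True
```

Real deployments apply the same logic upstream of the workload, often combined with reputation lists and traffic scrubbing, so the flood never reaches the application.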


EncroChat: France says ‘defence secrecy’ in police surveillance operations is constitutional

France’s Constitutional Council, which includes former prime ministers Laurent Fabius and Alain Juppé among its members, heard arguments on 29 March over whether the EncroChat and Sky ECC hacking operations were compatible with the right to a fair trial and the right to privacy guaranteed under the French constitution. At issue is a clause in the criminal code that allows prosecutors or magistrates to invoke “national defence secrecy” to prevent the disclosure of information about police surveillance operations that defence lawyers argue is necessary for defendants to receive a fair trial. French investigators used article 707-102-1 of the criminal code – described as a “legal bridge” between French police and the secret services – to ask France’s security service, DGSI, to carry out surveillance operations on two encrypted phone systems, EncroChat and Sky ECC. Patrice Spinosi, lawyer at the Council of State and the Supreme Court, representing the Association of Criminal Lawyers and the League of Human Rights, said the secret services hacking operation had struck a gold mine of information.



Quote for the day:

"Successful leaders see the opportunities in every difficulty rather than the difficulty in every opportunity" -- Reed Markham