
Daily Tech Digest - June 12, 2025


Quote for the day:

"It takes a lot of courage to show your dreams to someone else." -- Erma Bombeck


Tech Burnout: CIOs Might Be Making It Worse

“CIOs often unintentionally worsen burnout by underestimating the human toll of constant context switching, unclear priorities, and always-on availability. In the rush to stay competitive with AI-driven initiatives, teams are pushed to deliver faster without enough buffer for testing, reflection, or recovery,” Marceles adds. In the end, it’s the panic surrounding AI adoption, and not the technology itself, that’s accelerating burnout. The panic is running hot and high, surpassing anything CIOs and IT teams think of as normal. “The pressure to adopt AI everywhere is real, and CIOs are feeling it from every angle -- executives, investors, competitors. But when that pressure gets passed down as back-to-back initiatives with no breathing room, it fractures the team. Engineers get pulled into AI pilots without proper training. IT staff are asked to maintain legacy systems while onboarding new automation tools. And all of it happens under the expectation that this is just ‘the new normal,’” says Cahyo Subroto, founder of MrScraper, a data scraping tool. ... “What gets lost is the human capacity behind the tech. We don’t talk enough about how context-switching and unclear priorities drain cognitive energy. When everything is labeled critical, people lose the ability to focus. Productivity drops. Morale sinks. And burnout sets in quietly, until key people start leaving,” Subroto says.


Asset sprawl, siloed data and CloudQuery’s search for unified cloud governance

“The biggest challenge with existing tools is that they’re siloed — one for security, one for cost, one for asset inventory — making it hard to get a unified view across domains,” CQ founder Yevgeny Pats told VentureBeat. “Even simple questions like ‘What EBS volume is attached to an EC2 that is turned off?’ are hard to answer without stitching together multiple tools.” ... Taking a developer-first approach is critical, said Pats, because developers are ultimately the ones building, operating and securing today’s cloud infrastructure. Still, many cloud visibility tools were built for top-down governance, not for the people actually in the trenches. “When you put developers first, with accessible data, flexible APIs and native language like SQL, you empower them to move faster, catch issues earlier and build more securely,” he said. Customers are finding ways to use CloudQuery beyond asset inventory. ... “Having a fully serverless solution was an important requirement,” Hexagon cloud governance and FinOps expert Peter Figueiredo and CloudQuery director of engineering Herman Schaaf wrote in a blog post. “This decision brought lots of benefits since there is no need for time-consuming updates and virtually zero maintenance.”
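The kind of cross-asset question Pats describes reduces to a SQL join over a synced inventory. The sketch below uses an in-memory SQLite database as a toy stand-in; the table and column names are illustrative, not CloudQuery's actual schema:

```python
import sqlite3

# Toy stand-in for a synced asset inventory: one table per asset type,
# queryable with plain SQL. Table and column names are illustrative, not
# CloudQuery's actual schema.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE ec2_instances (instance_id TEXT, state TEXT);
    CREATE TABLE ebs_volumes (volume_id TEXT, attached_instance_id TEXT);
    INSERT INTO ec2_instances VALUES ('i-1', 'stopped'), ('i-2', 'running');
    INSERT INTO ebs_volumes VALUES ('vol-a', 'i-1'), ('vol-b', 'i-2');
""")

# "What EBS volume is attached to an EC2 that is turned off?" becomes a join.
rows = con.execute("""
    SELECT v.volume_id
    FROM ebs_volumes AS v
    JOIN ec2_instances AS i ON i.instance_id = v.attached_instance_id
    WHERE i.state = 'stopped'
""").fetchall()
print(rows)  # [('vol-a',)]
```

Once assets from every provider live in one queryable store, the same two-line join answers questions that would otherwise require stitching together separate security, cost and inventory tools.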


Digital twins combine with AI to help manage complex systems

And it’s not just AI making digital twins better. The digital twins can also make for better AI. “We’re using digital twins to actually generate information for large language models,” says PwC’s Likens, adding that the synthetic data is of better quality when it comes from a digital twin. “We see opportunity to have the digital twins generate the missing pieces of data we need, and it’s more in line with the environment because it’s based on actual data.” A digital twin is a working model of a system, says Gareth Smith, GM of software test automation at Keysight Technologies, an electronics company. “It’ll respond in a way that mimics the expected response of the physical system.” ... Another potential use case for digital twins that might become more relevant this year is to help with understanding and scaling agentic AI systems. Agentic AI allows companies to automate complex business processes, such as solving customer problems, creating proposals, or designing, building, and testing software. The agentic AI system can be composed of multiple data sources, tools, and AI agents, all interacting in non-deterministic ways. That can be extremely powerful, but extremely dangerous. So a digital twin can monitor the behavior of an agentic system to ensure it doesn’t go off the rails, and test and simulate how the system will react to novel situations.
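The monitoring idea can be sketched as a guardrail loop: replay each proposed agent action against a twin of the system before letting it touch real state, and block anything whose simulated effect violates a bound. The inventory model, action format and bounds below are all illustrative assumptions, not a description of any vendor's product:

```python
# Sketch of a digital twin as a guardrail for an agentic workflow: each
# proposed agent action is replayed against a twin of the system, and any
# action whose simulated effect violates a bound is blocked.
class InventoryTwin:
    """Toy working model of a warehouse system that agents operate on."""
    def __init__(self, stock):
        self.stock = stock

    def simulate(self, action):
        """Project the stock level after an action, with no side effects."""
        delta = action["qty"] if action["type"] == "restock" else -action["qty"]
        return self.stock + delta

def guarded_execute(twin, action, min_stock=0):
    projected = twin.simulate(action)
    if projected < min_stock:
        return ("blocked", projected)   # the proposal would go off the rails
    twin.stock = projected              # safe: apply to the real state
    return ("applied", projected)

twin = InventoryTwin(stock=10)
first = guarded_execute(twin, {"type": "ship", "qty": 4})
second = guarded_execute(twin, {"type": "ship", "qty": 9})
print(first, second)  # ('applied', 6) ('blocked', -3)
```

The same simulate-before-apply loop also supports the testing use case: feeding the twin novel scenarios reveals how the agentic system would react without risking the production system.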


Will Quantum Computing Kill Bitcoin?

If a technological advance were to render these assets insecure, the consequences could be severe. Cryptocurrencies function by ensuring that only authorized parties can modify the blockchain ledger. In Bitcoin’s case, this means that only someone with the correct private key can spend a given amount of Bitcoin. ... Quantum computers, however, operate on different principles. Thanks to phenomena like superposition and entanglement, they can perform many calculations in parallel. In 1994, mathematician Peter Shor developed a quantum algorithm capable of factoring large numbers exponentially faster than classical methods. ... Could quantum computing kill Bitcoin? In theory, yes: if Bitcoin failed to adapt and quantum computers suddenly became powerful enough to break its encryption, its value would plummet. But this scenario assumes crypto stands still while quantum computing advances, which is highly unlikely. The cryptographic community is already preparing, and the financial incentives to preserve the integrity of Bitcoin are enormous. Moreover, if quantum computers become capable of breaking current encryption methods, the consequences would extend far beyond Bitcoin. Secure communications, financial transactions, digital identities, and national security all depend on encryption. In such a world, the collapse of Bitcoin would be just one of many crises.
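The threat can be made concrete with a toy discrete logarithm, the problem underlying Bitcoin's ECDSA signatures (Shor's algorithm solves discrete logarithms as well as factoring). In this sketch a tiny multiplicative group stands in for the real secp256k1 curve, and all the numbers are illustrative:

```python
# Toy stand-in for the quantum threat: Bitcoin signatures rest on the
# hardness of a discrete logarithm, modeled here in a tiny multiplicative
# group instead of the real elliptic curve. Brute force succeeds only
# because the group is minuscule; Shor's algorithm would recover a
# full-size key in polynomial time.
p, g = 23, 5                          # toy, insecure group parameters
private_key = 13
public_key = pow(g, private_key, p)   # the value exposed publicly

# A classical attacker can only try exponents one by one -- a search that
# grows exponentially with the key's bit length.
recovered = next(k for k in range(p) if pow(g, k, p) == public_key)
print(recovered)  # 13
```

At real key sizes the brute-force loop is hopeless for classical machines; the whole quantum concern is that Shor's algorithm collapses that exponential search into a polynomial one.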


Smaller organizations nearing cybersecurity breaking point

Small and medium enterprises (SMEs) that do have budget to hire specialists often struggle to attract and retain skilled professionals due to the lack of variation in the role. Burnout is also a growing issue for the understaffed, underqualified IT teams common in small businesses. “With limited resource in the business, employees are often wearing multiple hats and the pressure to manage cybersecurity on top of their regular duties can lead to fatigue, missed threats, and higher turnover,” Exelby says. ... SMEs often mistakenly believe that cyber attackers only target larger organizations, but that’s often not the case — particularly because small business partners of larger companies are often deliberately targeted as part of supply chain attacks. “Threats are becoming more advanced but their resources aren’t keeping pace,” says Kristian Torode, director and co-founder of Crystaline, a specialist in SME cybersecurity. “Many SMEs are still relying on outdated systems or don’t have dedicated security teams in place, making them an easy target.” Torode adds: “They’re also seen by cybercriminals as an exploitable link in the supply chain, since they often work with larger enterprises.” “SMEs have traditionally been low-hanging fruit — with limited resources for cybersecurity training, advanced tools, or dedicated security teams,” Adam Casey, director of cybersecurity and CISO at cloud security firm Qodea, tells CSO.


Want fewer security fires to fight? Start with threat modeling

Some CISOs begin with one critical system or pilot project. From there, they build templates, training materials, and internal champions who help scale the practice across teams. Incorporating threat modeling into an organization’s development lifecycle doesn’t have to be daunting. In fact, it shouldn’t be, according to David Kellerman, Field CTO of Cymulate. “The key is to start small and make threat modeling approachable,” Kellerman says. Rather than rolling out a heavyweight process full of complex methodologies, CISOs should look for ways to embed threat modeling into workflows that teams already use. “I advise CISOs to embed threat modeling into existing workflows, such as architecture reviews, design discussions, or sprint planning, rather than creating separate, burdensome exercises.” This lightweight, integrated approach not only reduces resistance but helps normalize secure thinking within engineering culture. “Use simple frameworks like STRIDE or basic attacker storyboarding that non-security engineers can easily grasp,” Kellerman explains. “Make it collaborative and educational, not punitive.” As teams gain familiarity and confidence, organizations can gradually evolve their threat modeling capabilities. “The goal isn’t to build a perfect threat model on day one,” Kellerman says. “It’s to establish a security mindset that grows naturally within engineering culture.”
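A STRIDE pass of the kind Kellerman describes can be kept as plain data that non-security engineers fill in during an architecture review. The six categories are the standard STRIDE mnemonic; the component and findings below are hypothetical:

```python
# A lightweight STRIDE pass kept as plain data, so non-security engineers
# can fill it in during an architecture review. The six categories are the
# standard STRIDE mnemonic; the component and findings are hypothetical.
STRIDE = {
    "Spoofing": "Can someone pretend to be another user or service?",
    "Tampering": "Can data or code be modified in transit or at rest?",
    "Repudiation": "Can an actor deny an action because there is no audit trail?",
    "Information disclosure": "Can data leak to an unauthorized party?",
    "Denial of service": "Can the component be made unavailable?",
    "Elevation of privilege": "Can a user gain rights they should not have?",
}

def threat_model(component, answers):
    """Pair each STRIDE prompt with the team's answer for one component."""
    return {
        "component": component,
        "categories": {cat: {"prompt": q,
                             "finding": answers.get(cat, "not discussed")}
                       for cat, q in STRIDE.items()},
    }

model = threat_model("login API", {
    "Spoofing": "mitigated: tokens are verified server-side",
    "Denial of service": "open: no rate limiting on the PIN-reset endpoint",
})

# Anything marked "open" becomes a tracked work item for the next sprint.
open_items = [cat for cat, v in model["categories"].items()
              if v["finding"].startswith("open")]
print(open_items)  # ['Denial of service']
```

Because the output is ordinary structured data, it slots into the workflows teams already use: checked into the repo next to the design doc, diffed in review, and revisited at sprint planning.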


Rethinking Success in Security: Why Climbing the Corporate Ladder Isn’t Always the Goal

In the security field, like in many other fields, there seems to be constant pressure to advance. For whatever reason, the choice to climb the corporate ladder seems to garner far more reverence and respect than the choice to develop expertise and skills in one particular area of specialization. In other words, the decision to go higher and broader seems to be lauded more than the decision to go deeper and more focused. Yet, both are important in their own right. There are certain times in a security professional’s career when they find themselves at a crossroads – confronted by this issue. One career path is not more “correct” than another one. Which direction is the right one is an individual choice where many factors are relevant. ... It is the sad reality of the security field that we don’t show our respect and appreciation for our colleagues enough. That being said, the respect is there. See, one important thing to keep in mind is that respect is earned – not ordained or otherwise granted. If you are a great security professional, people take notice. You shouldn’t feel compelled to attain a specific title, paygrade, or otherwise just to get some respect. The dirty secret in the industry is that just because someone is in a higher-level role, it doesn’t mean that people respect them. 


The AI data center boom: Strategies for sustainable growth and risk management

Data center developers are experiencing extended long lead times for critical equipment such as generators, switchgear, power distribution units (PDUs) and cooling systems. Global shortages in semiconductors and electrical components are still impacting timelines. Additionally, uncertainty regarding tariffs is further complicating procurement and planning processes, as potential changes in trade policies could affect the cost and availability of these essential components. ... Data center owners are increasingly trying to use low-carbon materials to decarbonize both the centers and construction operations. This approach includes concrete that permanently traps carbon dioxide and steel produced using renewable energy. Microsoft is now building its first data centers made with structural mass timber to slash the use of steel and concrete, which are among the most significant sources of carbon emissions. ... Fires in data centers are typically caused by a breakdown of machinery, plant or equipment. A fire that spreads quickly can result in significant financial losses and business interruption. While the structures for data centers often have concrete frames that are not significantly impacted by fires, it’s the high-value equipment that drives losses – from cooling technology to high-end computer servers or graphic card components.


Managing software projects is a double-edged sword

Doing two platform shifts in six months was beyond challenging—it was absurd. We couldn’t have hacked together a half-baked version for even one platform in that time. It was flat-out impossible. Let’s just say I was quite unhappy with this request. It was completely unreasonable. My team of developers was being asked to work evenings and weekends on a task that was guaranteed to fail. The subtle implication that we were being rebellious and dishonest was difficult to swallow. So I set about making my position clear. I tried to stay level-headed, but I’m sure that my irritation showed through. I fought hard to protect my team from a pointless death march—my time in the Navy had taught me that taking care of the team was my top job. My protestations were met with little sympathy. My boss, who like me came from the software development tool company, certainly knew that the request was unreasonable, but he told me that while it was a challenge, we just needed to “try.” This, of course, was the seed of my demise. I knew it was an impossible task, and that “trying” would fail. How do you ask your team to embark on a task that you know will fail miserably and that they know will fail miserably? Well, I answered that question very poorly.


The CIO Has Evolved. It's Time the Board Catches Up

Across industries, CIOs have risen to meet the moment. They are at the helm of transformation strategies with business peers and drive digital revenue models. They even partner with CFOs to measure value, CMOs to reimagine customer experience and COOs to build data-driven models. ... CIOs have evolved. But if boards continue to treat them as back-room managers instead of strategic partners, they are underutilizing one of the most strategic roles in the enterprise. ... Today, every company is a technology company. AI, automation, cloud and digital platforms aren't just enablers. They form the foundation for competitive advantage and new revenue models. Similarly, cybersecurity is no longer just an IT challenge; it's a board-level fiduciary responsibility. Boards, however, predominantly engage with CIOs in a transactional manner. Issues such as budget approvals, risk reviews and project updates are common conversations. CIOs are rarely invited into conversations related to growth strategy, market reinvention or long-term capital allocation. This disconnect is proving to be a strategic liability. ... In industries where technology is the differentiator, CIOs should not merely sit in the boardroom; they should be shaping its agenda. Because if CIOs are empowered to lead, organizations don't just avoid risk, they build resilience, relevance and reinvention.

Daily Tech Digest - August 06, 2023

California Opens Privacy Probe Into Car Data Collection

Modern vehicles are equipped with a wide range of sensors, cameras, and other technologies that generate vast amounts of data. This data includes information about the vehicle’s location, speed, acceleration, braking, and even driver behavior. Additionally, connected car systems can collect data on music preferences, navigation history, and other personal preferences. Car data is collected by various parties, including automakers, technology companies, and third-party service providers. This data is used for a variety of purposes, such as improving vehicle performance, developing new features, and providing personalized services to consumers. However, concerns have been raised about the potential misuse or unauthorized access to this sensitive information. The investigation by the California Privacy Agency highlights the importance of protecting consumer privacy in the context of car data collection. As vehicles become more connected and autonomous, the amount of data being generated increases exponentially. 


An eventful week in the world of Arm and RISC-V

What’s most intriguing about all of these coincidental events, though, is NXP Semiconductors’ announcement. Almost all the initial investor companies announced in the new, unnamed organization are also Arm licensees. The press release states: “Semiconductor industry players Robert Bosch GmbH, Infineon Technologies AG, Nordic Semiconductor, NXP Semiconductors, and Qualcomm Technologies, Inc., have come together to jointly invest in a company aimed at advancing the adoption of RISC-V globally by enabling next-generation hardware development.” So, was this strategically timed to coincide with Arm’s annual meet? What’s also intriguing is that the announcement says a new company has been formed but the company isn’t named. Maybe the disclaimer is the added statement that “the company formation will be subject to regulatory approvals in various jurisdictions.” The new unnamed company formed in Germany also “calls on industry associations, leaders, and governments, to join forces in support of this initiative which will help increase the resilience of the broader semiconductor ecosystem.”


How Agile Management Disrupts the Status Quo

As a relatively newer project management methodology, you might wonder how agile differs from the typical or traditional project or team management approach an organization might use—and how it disrupts those traditional approaches. Agile principles are designed to allow for more seamless collaboration, feedback, and flexibility to ensure faster and more thorough success in bringing high-quality products to market. Agile methodology and coaching should focus on bringing together stakeholders, developers, programmers, and end-users to support the underlying principles. This management methodology encourages and facilitates ongoing conversations and regular communication as a primary means of measuring progress with incremental development. However, “incremental” movement doesn’t necessarily translate to slowing down the process. In fact, team member input—and, importantly, user input—ultimately allows for a more effective, functional, and satisfying final product.


A Journey Through Software Development Paradigms

In the quest for seamless collaboration and integration between development and operations, we encounter DevOps, a paradigm that bridges the gap between siloed teams and fosters a culture of continuous integration, delivery, and learning. We explore the triumphs and challenges faced by organizations adopting DevOps, witnessing its potential to accelerate software delivery, improve quality, and enhance customer experiences. Beyond the familiar shores of Agile and DevOps, our journey ventures into the uncharted territories of emerging paradigms, each holding the promise of further transformation. Lean Software Development, Continuous Delivery, and Site Reliability Engineering (SRE) await our exploration, revealing new insights and practices that continue to shape the future of software development. As we reach the culmination of our voyage, we stand in awe of the pioneers and visionaries who have paved the way for progress, embracing adaptation and innovation in the pursuit of excellence. 


The Rise of Emotionally Aware Technology: A Deep Dive into Global Affective Computing

One of the key drivers behind the rise of affective computing is the increasing demand for personalized user experiences. Today’s consumers expect their devices to understand their needs and preferences and to respond accordingly. Emotionally aware technology can meet these expectations by adapting its responses based on the user’s emotional state. For example, a virtual assistant that can detect frustration in a user’s voice could offer to simplify its instructions or provide additional support. Another factor contributing to the growth of affective computing is the advancement in machine learning and AI technologies. These technologies enable computers to learn from data and improve their performance over time, making it possible for them to recognize and interpret complex human emotions. For instance, facial recognition software can now analyze subtle facial expressions to determine a person’s mood, while natural language processing can interpret the emotional tone in written text.


Digital twins: The key to smart product development

In advanced industries, survey data indicate that almost 75 percent of companies have already adopted digital-twin technologies that have achieved at least medium levels of complexity. There is significant variance between sectors, however. Players in the automotive—and aerospace and defense—industries appear to be more advanced in their use of digital twins today, while logistics, infrastructure, and energy players are more likely to be developing their first digital-twin concepts. One major aerospace company is developing a machine-learning-based geometry optimization system that can simulate thousands of different configurations at high speed to identify weight savings, aerodynamic improvements, and other performance benefits. A European software company is building a multiphysics model of the human heart to support drug and medical-device development. In the United States, an automotive company is building a system that can model all the software and hardware configurations it offers. The system will be used to simulate the effect of design improvements before they are delivered to customers as over-the-air updates. 
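The "simulate thousands of configurations" pattern reduces to evaluating a scoring function over a parameter sweep and keeping the best candidate. The objective and parameters below (skin thickness, wing sweep) are illustrative toys, not the aerospace company's actual geometry model:

```python
# Exhaustive sweep over a toy design space: score every configuration and
# keep the best. The objective and parameters (skin thickness, wing sweep)
# are illustrative, not the aerospace company's actual geometry model.

def simulate(thickness, sweep):
    """Toy surrogate: thinner is lighter, and sweep near 30 degrees
    minimizes drag. Lower scores are better."""
    weight_penalty = thickness * 2.0
    drag_penalty = (sweep - 30) ** 2 * 0.01
    return weight_penalty + drag_penalty

best = min((simulate(t, s), t, s)
           for t in range(1, 11)        # thickness candidates
           for s in range(0, 61, 5))    # sweep-angle candidates
score, thickness, sweep = best
print(thickness, sweep, score)  # 1 30 2.0
```

The value of the digital-twin version is that `simulate` is a fast, faithful model rather than a physical prototype, so the sweep can cover thousands of candidates before anything is built.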


Four technology disruptions organizations must watch

Digital humans are becoming more and more like real people. They are readily available and have the ability to interact over a screen to handle a service-based issue or provide customer service instantly. As digital human software is integrated with natural language processing and robotic process automation tools, digital humans will become a growing presence in the workflows of more and more processes. Consulting leaders should focus, both singly and in tandem with leaders of other parts of an organization, on crafting approaches their clients can use to leverage a digital human workforce. Service delivery leaders — particularly within business process outsourcing providers — should be developing a strategy to deploy digital humans within their service delivery functions. ... A decentralized autonomous organization (DAO) is a digital entity, running on a blockchain (which provides a secure digital ledger for communication tracking), that can engage in business interactions with other DAOs, digital and human agents, as well as corporations, without conventional human management.


Bitcoin Beyond the Currency – the Disruption of Industries

The Bitcoin economy has the potential to become the biggest economy in the world; bigger than the United States or China. Bitcoin is a solution for everyone in the world who lives in fear of inflation risk, currency risk, or regime risk. A global, decentralized, trustless settlement layer and means of exchange with no state backing or intervention. For that to happen, BTC has to be more than a store of value; it has to be a currency. We have to stop thinking about it in terms of market capitalization and start thinking about it in terms of a gross decentralized product, the “GDP” of the Bitcoin economy. One doesn’t talk about the market capitalization of the dollar, we shouldn’t think of Bitcoin in those terms either. Bitcoin is continuing to become increasingly vital as legacy institutions fall behind the strides being made in the technology sector. These breakthroughs are significantly disrupting incumbent industries ranging from those commonly considered such as banking and finance, to more unique industries such as insurance and energy.


Mitigating AI Risks: Tips for Tech Firms in a Rapidly Changing Landscape

Keep in mind: despite their capabilities, large language models can’t tell the difference between what’s real and what’s not. And when asked to verify if something is true, they “frequently invent dates, facts, and figures.” While this stresses the importance of fact-checking on the end-user’s part, you could still face a lawsuit for defamation if any misleading information is published or shared with the public. In fact, ChatGPT-creator OpenAI is already being sued for libel after the system made false accusations against a radio host in the United States, claiming that he had embezzled funds from a non-profit organization. This is the first case of this nature against OpenAI, which could test the legal viability of any future AI-related defamation lawsuits. However, some legal experts believe the case may be challenging to maintain since there were no actual damages and OpenAI wasn’t notified about the claims or given the opportunity to remove them. Beyond defamation, tech firms that deploy large language models in user support systems can also face general liability risks relating to physical harm.


Data Democratization’s Impact on Users and Governance

A key result of increased user involvement in the nuts and bolts of data is the increased importance of data literacy throughout the organization, Stodder added. “It’s essential for organizations to understand what their current capabilities are and to make a plan to address any stumbling block they’re having.” Training tailored to the full range of user personas, from advanced users to more basic data consumers, will be critical to any data democratization effort. ... Another critical aspect of a democratization effort is an effective governance program. “Organizations can easily expand their data programs faster than they expand their governance programs,” Stodder explained, “which, given the existing strain placed on governance by regulations and the complexity of the data landscape, can only compound the problems.” Some of these governance issues can also be exacerbated by the distributed nature of a democratized landscape. “Many organizations are trying to consolidate to a kind of hub-and-spoke model,” Stodder said, “which has been effective for many of them. 



Quote for the day:

“When something is important enough, you do it even if the odds are not in your favor.” -- Elon Musk

Daily Tech Digest - July 09, 2022

Ray Kurzweil Wants to Upload Your Brain to the Cloud

Well, this can go one of two ways. Either this brain/cloud situation will be an incredibly beneficial superpower, or it could be just another farming device for data mining and ad sales. My take: If it’s a beneficial superpower then it won’t be given to the general public. Superpower for the rich. Farming device for the regular people. And thank you very much but I am farmed enough. My Hinge updates don’t need to be sent to my cerebellum. I can’t talk about taking a trip to Costa Rica without flights popping up on my phone. I’m grateful for the ways technology has touched my life but let me remind people about the Flo app. This is a period and fertility tracking app that settled with the FTC in May for selling its users’ personal health data without their knowledge. While there are definitely huge potential advances that could be made from brain/cloud merges, I can only think of social media companies that are designed to addict us, with at least one of these apps in the recent past tracking our eye movements to see what we liked so we could be coaxed to spend more time using it. It’s not all bad but I am not looking to plug in forever. And I don’t trust these companies to do good.


NIST’s pleasant post-quantum surprise

To understand the risk, we need to distinguish between the three cryptographic primitives that are used to protect your connection when browsing on the Internet: Symmetric encryption - With a symmetric cipher there is one key to encrypt and decrypt a message. They’re the workhorse of cryptography: they’re fast, well understood and, luckily, as far as is known, secure against quantum attacks. ... Symmetric encryption alone is not enough: which key do we use when visiting a website for the first time? We can’t just pick a random key and send it along in the clear, as then anyone surveilling that session would know that key as well. You’d think it’s impossible to communicate securely without ever having met, but there is some clever math to solve this. Key agreement - also called a key exchange, allows two parties that never met to agree on a shared key. Even if someone is snooping, they are not able to figure out the agreed key. Examples include Diffie–Hellman over elliptic curves, such as X25519. The key agreement prevents a passive observer from reading the contents of a session, but it doesn’t help defend against an attacker who sits in the middle and does two separate key agreements: one with you and one with the website you want to visit.
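The key-agreement idea can be illustrated with classic finite-field Diffie-Hellman using deliberately tiny, insecure numbers (real deployments use elliptic-curve variants like X25519, or post-quantum KEMs):

```python
# Classic finite-field Diffie-Hellman with deliberately tiny, insecure
# numbers, purely to show how two parties who never met agree on a key.
# Real deployments use X25519 (or a post-quantum KEM), not values this small.
p, g = 23, 5              # public modulus and generator (toy values)

a, b = 6, 15              # Alice's and Bob's private secrets

A = pow(g, a, p)          # Alice sends g^a mod p in the clear
B = pow(g, b, p)          # Bob sends g^b mod p in the clear

# Each side combines the other's public value with its own secret; a
# passive snooper sees only A and B and cannot feasibly recover the key.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
print(shared_alice == shared_bob)  # True
```

Note that this only defeats a passive observer, which matches the caveat in the excerpt: an active attacker in the middle can run one key agreement with each side, which is why authentication (digital signatures and certificates) is still needed on top.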


Buggy 'Log in With Google' API Implementation Opens Crypto Wallets to Account Takeover

The first bug involved the common feature in mobile apps that allows users to log in using an external service, like Apple ID, Google, Facebook, or Twitter. In this case, the researchers examined the "log in with Google" option — and found that the authentication token mechanism could be manipulated to accept a rogue Google ID as being that of the legitimate user. The second bug allowed researchers to get around two-factor authentication. A PIN-reset mechanism was found to lack rate-limiting, allowing them to mount an automated attack to uncover the code sent to a user's mobile number or email. "This endpoint does not contain any sort of rate limiting, user blocking, or temporary account disabling functionality. Basically, we can now run the entire 999,999 PIN options and get the correct PIN within less than 1 minute," according to the researchers. Each security issue on its own provided limited abilities to the attacker, according to the report. "However, an attacker could chain these issues together to propagate a highly impactful attack, such as transferring the entire account balance to his wallet or private bank account."
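The missing control is straightforward to sketch: count failures per account and refuse further guesses during a lockout window, so a sweep of all 999,999 PINs dies after a handful of tries. Names and thresholds below are assumptions, not details of the audited application:

```python
import time

# Illustrative version of the missing control: count failed attempts per
# account and refuse further guesses during a lockout window. Names and
# thresholds are assumptions, not details of the audited application.
MAX_ATTEMPTS = 5
LOCKOUT_SECONDS = 300

failed = {}  # account_id -> (attempt_count, time_of_first_failure)

def check_pin(account_id, submitted_pin, real_pin, now=None):
    now = time.time() if now is None else now
    count, since = failed.get(account_id, (0, now))
    if count >= MAX_ATTEMPTS and now - since < LOCKOUT_SECONDS:
        return "locked"                 # refuse even to evaluate the PIN
    if submitted_pin == real_pin:
        failed.pop(account_id, None)    # success resets the counter
        return "ok"
    failed[account_id] = (count + 1, since)
    return "wrong"

# An automated sweep now hits the lockout after MAX_ATTEMPTS guesses.
results = [check_pin("acct-1", pin, 424242, now=0) for pin in range(10)]
print(results.count("locked"))  # 5
```

Even this minimal counter turns a sub-minute brute force into one that takes years, which is why the researchers singled out the absence of any rate limiting, user blocking, or temporary account disabling as the core flaw.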


How To Become A Self-Taught Blockchain Developer

The Blockchain developer must provide original solutions to complex issues, such as those involving high integrity and command and control. The developer also performs complex analysis, design, development, testing, and debugging of computer software, particularly for specific product hardware or for companies' technical service lines. Developers carry out computer system selection, operating architecture integration, and program design. Finally, they use their understanding of one or more platforms and programming languages while operating on a variety of systems. There will undoubtedly be challenges for the Blockchain developer. For instance, the developer must fulfill the criteria of a Blockchain development project despite using old technology and its restrictions. A Blockchain developer needs specialized skills due to the difficulties in understanding the technological realities of developing decentralized cryptosystems, processes that are beyond the normal IT development skill-set.


Machine learning begins to understand human gut

While human gut microbiome research has a long way to go before it can offer this kind of intervention, the approach developed by the team could help get there faster. Machine learning algorithms are often produced with a two-step process: accumulate the training data, and then train the algorithm. But the feedback step added by Hero and Venturelli's team provides a template for rapidly improving future models. Hero's team initially trained the machine learning algorithm on an existing data set from the Venturelli lab. The team then used the algorithm to predict the evolution and metabolite profiles of new communities that Venturelli's team constructed and tested in the lab. While the model performed very well overall, some of the predictions identified weaknesses in the model performance, which Venturelli's team shored up with a second round of experiments, closing the feedback loop. "This new modeling approach, coupled with the speed at which we could test new communities in the Venturelli lab, could enable the design of useful microbial communities," said Ryan Clark, co-first author of the study, who was a postdoctoral researcher in Venturelli's lab when he ran the microbial experiments.
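The train-predict-measure-augment loop the team describes can be sketched in miniature. The "lab experiment" and model below are toy stand-ins for the wet-lab measurements and the actual learning algorithm, not the study's pipeline:

```python
# Miniature version of the feedback loop: train, predict on new communities,
# run the worst-predicted one in the "lab," and fold the result back into
# the training set. Everything here is a toy stand-in for the real study.

def run_lab_experiment(community):
    """Stand-in for a wet-lab measurement, with a pairwise interaction."""
    bonus = 0.05 if community[0] and community[1] else 0.0
    return sum(community) * 0.1 + bonus

def fit(data):
    """Stand-in model: average metabolite output per community member."""
    return sum(y / max(sum(x), 1) for x, y in data) / len(data)

def predict(model, community):
    return model * sum(community)

# Step 1: train on an existing data set.
training = [(c, run_lab_experiment(c)) for c in ([1, 1, 0], [0, 1, 1])]
model = fit(training)

# Step 2: predict new communities, find the biggest miss, and close the
# feedback loop by adding that experiment to the training set.
new_communities = [[1, 1, 1], [1, 0, 0]]
worst = max(new_communities,
            key=lambda c: abs(predict(model, c) - run_lab_experiment(c)))
training.append((worst, run_lab_experiment(worst)))
model = fit(training)
print(len(training))  # 3
```

The point of the loop is that the model itself chooses which experiments are most informative to run next, so each wet-lab round targets the model's weakest predictions rather than sampling communities at random.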


Jorge Stolfi: ‘Technologically, bitcoin and blockchain technology is garbage’

It is the only thing that blockchain could contribute: the absence of a central authority. But that only creates problems. Because to have a decentralized database you have to pay a very high price. You must ensure that all miners do “proof of work.” It takes longer, and it is not even secure because in the past there have been occasions where they have had to rewind several hours’ worth of blocks to remove a bad transaction, in 2010 and 2013. The conditions that made that possible are still there and that’s why blockchain technology is a fraud: it promises to do something that people already know how to do. ... It is the only digital system that does not follow customary money laundering laws. That’s why criminals use it. Once you have paid a ransom, there is no way for the victim to cancel the payment and get the money back; not even the government can do it easily. It is anonymous, and when a hacker encrypts your data, they do not have to enter your system directly, where they would leave a trace. They have botnets, computers they have already hacked, so tracking them down is difficult.


How to Write Secure Source Code for Proprietary Software

Source code is at the mercy of developers and anyone else who has access to it. That means limiting access to your source code and establishing security guidelines for those with access are vital for increasing security. It's also important to realize that insider threat actors aren't always malicious. Often, insider threats come from mistakes or negligent actions taken by employees. ... Outside threats come from beyond your development team. They may come from competitors who want to use the code to improve their own, or from hackers who will attempt to sell your source code or pick it apart looking for vulnerabilities. The point is, whether a leak comes from inside or outside threats, it can have terrible consequences. Source code leaks can enable additional attacks, exposing large amounts of sensitive data. They can also lead to financial losses by giving competitors an advantage. And your customers will think twice before dealing with a developer that has exposed valuable customer data in the past.


How IoT and digital twins could help CIOs meet ESG pledges

This inevitably leads to accusations of greenwashing, where marketing departments hijack the ambitions of organisations before any serious, robust plan is in place. For CIOs tasked with bringing down emissions and adhering to targets, this can be a huge problem. A recent IBM CEO study finds that CEOs are coming under increasing pressure from stakeholders to act on sustainability. It cites "frustrations" with organisations' "all talk and no action". Culture is seen as a significant issue hampering any attempts to co-ordinate carbon emission strategies. "If you want to avoid the trap of greenwashing, it needs to start with the CEO," says Alicia Asín, CEO of Libelium, an IoT business based in Zaragoza, Spain. Asín, speaking on a panel at IoT World Congress, added that this creates a culture where the whole organisation needs to look at the design and sustainability credentials of every technology offering for every sustainable project. She used the example of a farm customer that is using IoT to reduce the amount of water used in irrigation and the level of pesticides applied to its crops.


GitHub Copilot is the first real product based on large language models

The success of GitHub Copilot and Codex underlines one important fact. When it comes to putting LLMs to real use, specialization beats generalization. When Copilot was first introduced in 2021, CNBC reported: “…back when OpenAI was first training [GPT-3], the start-up had no intention of teaching it how to help code, [OpenAI CTO Greg] Brockman said. It was meant more as a general purpose language model [emphasis mine] that could, for instance, generate articles, fix incorrect grammar and translate from one language into another.” But while GPT-3 has found mild success in various applications, Copilot and Codex have proven to be great hits in one specific area. Codex can’t write poetry or articles like GPT-3, but it has proven to be very useful for developers of different levels of expertise. Codex is also much smaller than GPT-3, which means it is more memory- and compute-efficient. And given that it has been trained for a specific task, as opposed to the open-ended and ambiguous world of human language, it is less prone to the pitfalls that models like GPT-3 often fall into.


LockBit explained: How it has become the most popular ransomware

After obtaining initial access to networks, LockBit affiliates deploy various tools to expand their access to other systems. These include credential dumpers like Mimikatz; privilege escalation tools like ProxyShell; tools used to disable security products and various processes, such as GMER, PC Hunter and Process Hacker; network and port scanners to identify Active Directory domain controllers; and remote execution tools like PsExec or Cobalt Strike for lateral movement. The activity also involves the use of obfuscated PowerShell and batch scripts and rogue scheduled tasks for persistence. Once deployed, the LockBit ransomware can also spread to other systems via SMB connections using collected credentials, as well as by using Active Directory group policies. When executed, the ransomware will disable Windows volume shadow copies and will delete various system and security logs. The malware then collects system information such as hostname, domain information, local drive configuration, remote shares and mounted storage devices, and then starts encrypting all data on the local and remote devices it can access.



Quote for the day:

"If you want people to think, give them intent, not instruction." -- David Marquet

Daily Tech Digest - March 12, 2022

The Similarities and Differences between ITIL 4 and VeriSM

Even though ITIL has been around for many years and is considered the de facto best practice framework for IT service management (ITSM), VeriSM emerged in 2018 to find its place in the market. And this came before the launch of ITIL 4 from AXELOS in February 2019. VeriSM’s publication introduced modern approaches to service management such as Agile and shift-left, among others. ITIL 4, once released, also incorporated these modern concepts that have conquered the IT world during the last few years. VeriSM claims not to be a body of service management best practice but rather an approach whose key facet (it is not a process flow, nor a set of procedures) is the Management Mesh, in which all the popular management practices (ITIL, COBIT, ISO/IEC 20000, CMMI-SVC, DevOps, Agile, Lean, SIAM, etc.) and emerging technologies and trends (artificial intelligence (AI), containerization, the Internet of Things (IoT), big data, cloud, shift-left, continuous delivery, CX/UX, etc.) are included. Maybe there’s some truth in this statement.


Solo.io Intros Gloo Mesh Enterprise 2.0

Introduced last year, Gloo Mesh Enterprise is an Istio-based Kubernetes-native solution for multicluster and multimesh service mesh management. New features in 2.0 such as multitenant workspaces enable users to set fine-grained access control and editing permissions based on roles for shared infrastructure, enabling teams to collaborate in large environments. Users can manage traffic, establish workspace dependencies, define cluster namespaces, and control destinations directly in the UI. And the policies can be re-used and adapted using labels. Gloo Mesh Enterprise 2.0 also features a new Gloo Mesh API for Istio management that enables developers to configure rules and policies for both north-south and east-west traffic from a single, unified API. The new API also simplifies the process of expanding from a single cluster to dozens or hundreds of clusters. And the new Gloo Mesh UI for observability provides service topology graphs that highlight network traffic, latency, and speeds while automatically saving the new state when you move clusters or nodes.


Introducing Community Security Analytics

You can use CSA to further investigate high-fidelity security findings from Security Command Center (SCC) and correlate them with logs for decision-making. For example, you may use a CSA query to get the list of admin activity performed by a newly created service account key flagged by Security Command Center in order to validate any malicious activity. It’s important to note that the detection queries provided by CSA will be self-managed, and you may need to tune them to minimize alert noise. If you’re looking for managed and advanced detections, take a look at SCC Premium’s growing threat detection suite, which provides a list of regularly-updated managed detectors designed to identify threats within your systems in near real-time. CSA is not meant to be a comprehensive, managed set of threat detections, but a collection of community-contributed sample analytics that give examples of essential detective controls, based on cloud techniques. Use CSA in conjunction with our threat detection and response capabilities as well as our threat prevention capabilities.
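CSA's detections are shipped as SQL queries over exported logs; the sketch below captures the same detective-control idea in plain Python over hypothetical log records. The field names and the admin-method list are illustrative, not the actual Cloud Audit Logs schema:

```python
from datetime import datetime, timedelta

# Hypothetical audit-log records (illustrative fields, not the real schema).
logs = [
    {"principal": "svc-new@example.iam", "method": "SetIamPolicy",
     "time": datetime(2022, 3, 10, 9, 0)},
    {"principal": "svc-old@example.iam", "method": "storage.objects.get",
     "time": datetime(2022, 3, 10, 9, 5)},
    {"principal": "svc-new@example.iam", "method": "compute.instances.delete",
     "time": datetime(2022, 3, 10, 9, 10)},
]

# Service account keys flagged by SCC as newly created, with creation times.
flagged_keys = {"svc-new@example.iam": datetime(2022, 3, 10, 8, 30)}

ADMIN_METHODS = {"SetIamPolicy", "compute.instances.delete"}

def admin_activity_by_new_keys(logs, flagged_keys, window=timedelta(days=7)):
    """List admin actions performed by a flagged key soon after its creation."""
    hits = []
    for entry in logs:
        created = flagged_keys.get(entry["principal"])
        if created is None:
            continue
        if entry["method"] in ADMIN_METHODS and created <= entry["time"] <= created + window:
            hits.append(entry)
    return hits

hits = admin_activity_by_new_keys(logs, flagged_keys)
for h in hits:
    print(h["principal"], h["method"])
```

The real CSA queries express this kind of join and filter in SQL, which is what makes them easy to tune for your own alert-noise threshold.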


µTransfer: A technique for hyperparameter tuning of enormous neural networks

Our theory of scaling enables a procedure to transfer training hyperparameters across model sizes. If, as discussed above, µP networks of different widths share similar training dynamics, they likely also share similar optimal hyperparameters. Consequently, we can simply apply the optimal hyperparameters of a small model directly onto a scaled-up version. We call this practical procedure µTransfer. If our hypothesis is correct, the training loss-hyperparameter curves for µP models of different widths would share a similar minimum. Conversely, our reasoning suggests that no scaling rule of initialization and learning rate other than µP can achieve the same result. This is supported by the animation below. Here, we vary the parameterization by interpolating the initialization scaling and the learning rate scaling between PyTorch default and µP. As shown, µP is the only parameterization that preserves the optimal learning rate across width, achieves the best performance for the model with width 2¹³ = 8192, and where wider models always do better for a given learning rate—that is, graphically, the curves don’t intersect.
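A minimal sketch of the transfer recipe, assuming a deliberately simplified version of the µP scaling rules for hidden layers under Adam (the actual parameterization in the paper covers initialization, per-layer multipliers, and more cases than shown here):

```python
def mup_transfer(base_lr, base_width, width):
    """Transfer hyperparameters tuned at base_width to a wider model.

    Simplified µP-style rule for hidden layers under Adam (an
    illustration, not the paper's full parameterization): the learning
    rate shrinks like 1/width, and the init scale like 1/sqrt(width).
    """
    return {
        "hidden_lr": base_lr * base_width / width,
        "hidden_init_std": (1.0 / width) ** 0.5,
    }

# Tune the learning rate once on a small, cheap proxy model...
small = mup_transfer(base_lr=1e-3, base_width=128, width=128)
# ...then apply it directly to a model 64x wider, with no further tuning.
wide = mup_transfer(base_lr=1e-3, base_width=128, width=8192)
print(small["hidden_lr"], wide["hidden_lr"])
```

The point of µTransfer is that the expensive sweep happens only at the small width; the wide model inherits the optimum through the scaling rule instead of through another sweep.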


Will Transformers Take Over Artificial Intelligence?

Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. This led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates consistent new text to an unsettling degree. The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas — like processing multiple kinds of input at once, or planning tasks — suggests transformers can handle even more. “Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich. Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence.
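The operation behind all of this versatility is small enough to sketch directly. Below is a minimal single-head self-attention in NumPy; it is a didactic sketch that omits the multi-head projections, masking, and feed-forward layers of a full transformer block:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention, the core transformer operation.

    Every position attends to every other, which is why the same
    mechanism transfers from word sequences to image patches and
    other kinds of input.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    weights = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8              # five "tokens", eight features each
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one context-mixed vector per input position
```

Nothing in the computation assumes the rows of `X` are word embeddings, which is precisely why the same block works on image patches or audio frames.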


The Questionable Ethics Of Bitcoin ESG Junk Science

In February 2022, an op-ed, titled “Revisiting Bitcoin’s Carbon Footprint,” was published in the scientific journal “Joule,” authored by four researchers: Alex de Vries, Ulrich Gallersdörfer, Lena Klaaßen and Christian Stoll. Their written commentary, which admits limitations in their estimates, states that as bitcoin miners migrated from China to Kazakhstan and the United States in 2021, the network’s carbon footprint increased to 0.19% of global emissions. What went unnoticed by the media was that the researchers have professional motives to overstate Bitcoin’s relatively tiny environmental impact. The op-ed’s lead author, Alex de Vries, failed to disclose that he is employed by De Nederlandsche Bank (DNB), the Dutch central bank. Central banks are no fans of open, global payment rails, which bypass monopolistic government settlement layers. De Vries first released his “Bitcoin Energy Consumption Index” in November 2016, which coincides with his first round of employment with DNB, giving the appearance that DNB encouraged his critique of Bitcoin’s energy consumption. 


DBaaS and the Enterprise

From a DBA perspective (and being a former DBA myself), I always enjoyed working on more challenging issues. Mundane operations like launching servers and setting up backups make for a less-than-exciting daily work experience. When managing large fleets, these operations make up the majority of the work. As applications grow more complex and data sets grow rapidly, it is much more interesting to work with the application teams to design and optimize the data tier. Query tuning, schema design, and workflow analysis are much more interesting (and often beneficial) when compared to the basic setup. DBAs are often skilled at quickly identifying issues and understanding design issues before they become problems. When an enterprise adopts a DBaaS model, this can free up the DBAs to work on more complex problems. They are also able to better engage and understand the applications they are supporting. A common comment I get when discussing complex tickets with clients is: “well, I have no idea what the application is doing, but we have an issue with XYZ”.


How to Develop Strategies that Close the Leadership Gap with the Generation Gap

The leadership gap that has been forecast for the past several years is upon us. And it could not have come at a worse time, with the Covid-19 pandemic still underway, impacting each of the multiple generations in the workforce differently. Many companies are unable to keep pace with their need to fill leadership openings created by Baby Boomers taking retirement and by companies expanding, in some cases at rapid rates. Their pipelines are not sufficient to fill the increasing number of leadership openings promptly. Companies that lack a focused strategy and drive to close this gap might very well find themselves struggling to stay in business and maintain their market share. The significant numbers of Baby Boomers taking retirement for the past ten years have only exacerbated the leadership gap. Many of them are leaving their leadership roles for their well-earned leisure lifestyle. In the third quarter of 2020, the number of Boomers who retired increased by over three million from the same quarter in 2019.


How Digital Transformation is Rebuilding the Construction Industry

As construction companies continue to comply with pandemic restrictions, technology has been essential to the implementation of health and safety measures. For instance, firms can use wearables and AI sensors to detect when workers are not maintaining proper physical distance. Some construction projects are even using contact tracing devices that alert employees when there are too many personnel at a worksite; these can identify potentially infected individuals in the event of a confirmed COVID-19 case. These measures not only prioritize employee safety, but also help companies avoid entire site shutdowns. Even remotely, technology is a vital asset to construction firms. With fewer personnel allowed on-site, companies can rely on new cloud-based video platforms to assist with site monitoring. In the city of Miami, virtual inspections of construction sites through either a Zoom or a Microsoft Teams video call are now routine between engineers on site and building control officials. With usage tripling in 2020 alone, drones are also being used more frequently to improve mapping and surveying processes.


It’s not a Great Resignation–it’s a Great Rethink

Leaders often regard purpose in a limited way as either a marketing or human resources exercise. Companies that go deepest with purpose take a much more comprehensive approach, treating purpose as an operating system and embedding it in processes, organizational structures, and culture. Global professional services firm EY adopted a system of metrics to spur behaviors associated with its purpose. “Companies really have to be able to show what they’re doing,” EY’s CEO Carmine Di Sibio told me. “They get into trouble when they talk a lot about purpose and it’s just talk.” Imagine what it feels like when everything about your work ties back in clear, even obvious ways to your purpose. That’s what employees at deep-purpose companies experience on the job. It’s encouraging that some CEOs—68% of those queried in one survey—are placing “more emphasis” on purpose, but that’s not enough. For purpose to feel genuine and meaningful, they must live it in their daily work, hold others accountable for acting in ways congruent with that purpose, and bring it alive for their workforce.



Quote for the day:

"The essence of leadership is the willingness to make the tough decisions. Prepared to be lonely." -- Colin Powell

Daily Tech Digest - February 04, 2022

5 steps to run a successful cybersecurity champions program

A solid program helps to demystify cybersecurity for non-security employees and bring it into the scope of their own specialties. “People often think cybersecurity is an unfathomable, deeply technical concept, but having security champions creates an open dialogue, and couching security issues and threats in ways that are relevant to specific roles is the best way to navigate this issue,” Dr. John Blythe, chartered psychologist and director of cyber workforce psychology at cybersecurity training firm Immersive Labs, tells CSO. “Engineers need to understand how to write secure code, executive teams need to know how to respond better in a cyber crisis, and IT teams need to know how to secure cloud infrastructure.” Once trained, cybersecurity champions can identify and address cybersecurity vulnerabilities before they become widespread and problematic, adds Dominic Grunden, CISO of UnionDigital Bank. “In doing so, they can save organizations significant time and money in the process,” he says. Grunden has integrated and run cybersecurity champions programs across multiple industries and organizations.


Charming Kitten Sharpens Its Claws with PowerShell Backdoor

Charming Kitten is now using what researchers have dubbed PowerLess Backdoor, a previously undocumented PowerShell trojan that supports downloading additional payloads, such as a keylogger and an info stealer. The team also discovered a unique new PowerShell execution process related to the backdoor aimed at slipping past security-detection products, Frank wrote. “The PowerShell code runs in the context of a .NET application, thus not launching ‘powershell.exe’ which enables it to evade security products,” he wrote. Overall, the new tools show Charming Kitten developing more “modular, multi-staged malware” with payload-delivery aimed at “both stealth and efficacy,” Frank noted. The group also is leaning heavily on open-source tools such as cryptography libraries, weaponizing them for payloads and communication encryption, he said. This reliance on open-source tools demonstrates that the APT’s developers likely lack “specialization in any specific coding language” and possess “intermediate coding skills,” Frank observed.


Toward a 3-Stage Software Development Lifecycle

No one wants someone else to find the fault in their code, but at the same time, developers aren’t crazy about running tests, whether that means writing tests or doing manual tests. But developers shouldn’t have to ship their code off to a testing environment to discover if it works or not and worry about the embarrassment of having a colleague find an error. Robust change validation does just that — it allows developers to spot potential errors before the application or update leaves the local environment. Validating changes involves not only testing that the code is good, but also that it works as expected with all dependencies. When a change isn’t validated, the developer knows immediately and can fix the problem and try again immediately. And when a change is validated, the developer can push the code into the production-like environment with confidence, knowing that it’s been tested against upstream and downstream dependencies and won’t cause anything to break unexpectedly. The idea is that as much hardening as possible happens in the development environment so that developers can ship with maximum confidence. 
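A minimal sketch of such a local validation gate, with hypothetical check names standing in for real unit, contract, and dependency tests:

```python
def validate_change(checks):
    """Run every validation check locally and report failures before shipping.

    `checks` maps a check name to a zero-argument callable that raises
    (or returns False) on failure: unit tests, contract tests against
    upstream and downstream dependencies, linting, and so on.
    """
    failures = []
    for name, check in checks.items():
        try:
            ok = check()
        except Exception as exc:
            failures.append((name, str(exc)))
            continue
        if ok is False:
            failures.append((name, "check returned False"))
    return failures

# Hypothetical checks for a change to a pricing service.
checks = {
    "unit tests": lambda: True,
    "upstream API contract": lambda: True,
    "downstream schema compatible": lambda: False,  # would break a consumer
}
failures = validate_change(checks)
ready_to_ship = not failures
print(ready_to_ship, failures)
```

The developer sees the broken downstream contract immediately, in the local environment, rather than after a colleague finds it in a shared testing environment.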


The Making of Atomic CSS: An Interview With Thierry Koblentz

Everything was fair game and everything was abused as we had a very limited set of tools with the demand to do a lot. But things had changed dramatically by the time I joined Yahoo!. Devs from the U.K. were strong supporters of Web Standards and I credit them for greatly influencing how HTML and CSS were written at Yahoo!. Semantic markup was a reality and CSS was written following the Separation of Concerns (SoC) principle to a “T”. YUI had CSS components but did not have a CSS framework yet. ... Every Yahoo! design team had their own view of what was the perfect font size, the perfect margin, etc., and we were constantly receiving requests to add very specific styles to the library. That situation was unmaintainable so we decided to come up with a tool that would let developers create their own styles on the fly, while respecting the Atomic nature of the authoring method. And that’s how Atomizer was born. We stopped worrying about adding styles — CSS declarations — and instead focused on creating a rich vocabulary to give developers a wide array of styling, like media queries, descendant selectors, and pseudo-classes, among other things.


Cyber Signals: Defending against cyber threats with the latest research, insights, and trends

Cyber Signals aggregates insights we see from our research and security teams on the frontlines. This includes analysis from our 24 trillion security signals combined with intelligence we track by monitoring more than 40 nation-state groups and over 140 threat groups. In our first edition, we unpack the topic of identity. Our identities are made up of everything we say and do in our lives, recorded as data that spans across a sea of apps and services. While this delivers great utility, if we don’t maintain good security hygiene our identities are at risk. And over the last year, we have seen identity become the battleground for security. While threats have been rising fast over the past two years, there has been low adoption of strong identity authentication, such as multifactor authentication (MFA) and passwordless solutions. For example, our research shows that across industries, only 22 percent of customers using Microsoft Azure Active Directory (Azure AD), Microsoft’s Cloud Identity Solution, have implemented strong identity authentication protection as of December 2021. 


Bitcoin encryption is safe from quantum computers – for now

In the second part of the study, the team calculated the number of physical qubits needed to break the encryption used for Bitcoin transactions. Marek Narozniak, a physicist at New York University (NYU) in the US who was not part of the study, points out that this question – whether cryptocurrencies are safe against quantum computer attacks – comes with additional constraints not present in the FeMo-co simulation. While a 10-day computation time may be acceptable for FeMo-co simulations, Narozniak notes that the Bitcoin network is set up so that a hacker armed with an error-correcting quantum computer would have a very limited time to decrypt information and steal funds. According to Webber and collaborators, breaking Bitcoin encryption within one hour – a time window within which transactions may be vulnerable – would take about three hundred million qubits. Based on this result, Narozniak concludes that “Bitcoin is pretty safe”, although he warns that not all cryptocurrencies operate the same way. “There are other cryptocurrencies that work differently, and they have different algorithms that could be more vulnerable,” he says.
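The time-versus-qubits tradeoff can be sketched with a rough inverse-scaling model. The roughly three-hundred-million-qubits-per-hour anchor comes from the figure quoted above; treating the relationship as strictly inverse-proportional is a simplifying assumption for illustration, not the study's exact analysis:

```python
def qubits_needed(window_hours, qubits_per_hour=3.0e8):
    """Rough inverse-scaling estimate of physical qubits required to break
    a key within a given attack window: halving the window roughly
    doubles the qubit count. The 3e8-per-hour anchor is the article's
    figure; strict inverse proportionality is a simplifying assumption.
    """
    return qubits_per_hour / window_hours

for hours in (0.5, 1, 24):
    print(f"{hours} h window -> ~{qubits_needed(hours):.2e} qubits")
```

Under this model, a hacker constrained to a one-hour window needs orders of magnitude more hardware than one who could compute for days, which is exactly why the limited transaction window keeps Bitcoin "pretty safe" for now.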


The 5 Big Risk Vectors of DeFi

Intrinsic protocol risk in DeFi comes in all shapes. In DeFi lending protocols such as Compound or Aave, liquidation is a mechanism that maintains lending-market collateralization at appropriate levels. Liquidations allow participants to take part of the principal in undercollateralized positions. Slippage is another condition present in automated market making (AMM) protocols such as Curve. High slippage conditions in Curve pools can force investors to pay extremely high fees to remove liquidity supplied to a protocol. ... A unique aspect of DeFi, decentralized governance proposals control the behavior of a DeFi protocol and, quite often, are the cause of changes in its liquidity composition affecting investors. For instance, governance proposals that alter weights in AMM pools or collateralization ratios in lending protocols typically cause liquidity to flow in or out of the protocol. A more concerning aspect of DeFi governance from the risk perspective is the increasing centralization of the governance structure of many DeFi protocols. Even though DeFi governance models are architecturally decentralized, many of them are controlled by a small number of parties that can influence the outcome of any proposal.
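Slippage mechanics are easy to illustrate with the constant-product invariant used by many AMMs. Curve itself uses a hybrid invariant designed to reduce slippage for like-valued assets, so treat this as a generic sketch of the mechanism rather than Curve's formula:

```python
def swap_out(dx, x_reserve, y_reserve, fee=0.003):
    """Token output for a constant-product AMM swap (x * y = k).

    This is the Uniswap-v2-style rule with a 0.3% fee; Curve uses a
    hybrid invariant, so this illustrates slippage mechanics only.
    """
    dx_after_fee = dx * (1 - fee)
    return y_reserve - (x_reserve * y_reserve) / (x_reserve + dx_after_fee)

pool_x = pool_y = 1_000_000.0   # hypothetical pool reserves
spot_price = pool_y / pool_x    # 1.0 before any trade

for trade in (1_000.0, 100_000.0):
    out = swap_out(trade, pool_x, pool_y)
    impact = 1 - (out / trade) / spot_price
    print(f"sell {trade:>9,.0f}: receive {out:>9,.0f}, price impact {impact:.2%}")
```

The larger trade moves the pool far from its starting ratio, so the effective price degrades sharply; the same mechanism is what makes exiting a large position from a thin pool expensive.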


Thousands of Malicious npm Packages Threaten Web Apps

Because npm packages in general are being downloaded upwards of 20 billion times a week, and thus installed across countless web-facing components of software and applications across the world, exploiting them means a sizeable playing field for attackers, researchers said in their Wednesday report. ... That level of activity enables threat actors to launch a number of software supply-chain attacks, researchers said. Accordingly, WhiteSource investigated malicious activity in npm, identifying more than 1,300 malicious packages in 2021 — which were subsequently removed, but may have been brought into any number of applications before they were taken down. “Attackers are focusing more efforts on using npm for their own nefarious purposes and targeting the software supply chain using npm,” they wrote in the report. “In these supply-chain attacks, adversaries are shifting their attacks upstream by infecting existing components that are distributed downstream and installed potentially millions of times.”
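A toy version of the defensive side of this problem: checking a lockfile's dependency names against a list of known-malicious packages. Real supply-chain tooling, such as `npm audit`, matches exact versions against advisory databases rather than a hard-coded name list, and the package names below are invented for illustration:

```python
import json

# Hypothetical names; real blocklists come from advisory databases.
KNOWN_MALICIOUS = {"evil-colors", "definitely-not-lodash"}

def audit_lockfile(lockfile_text):
    """Flag dependencies whose names appear on a known-malicious list.

    A toy supply-chain check; real tools match versions against
    advisory databases, not just names.
    """
    lock = json.loads(lockfile_text)
    deps = lock.get("dependencies", {})
    return sorted(name for name in deps if name in KNOWN_MALICIOUS)

lock = json.dumps({
    "dependencies": {
        "lodash": {"version": "4.17.21"},
        "evil-colors": {"version": "1.0.0"},
    }
})
print(audit_lockfile(lock))  # → ['evil-colors']
```

Because infected components are pulled in transitively and installed millions of times, this kind of check only helps once an advisory exists, which is why the upstream detection work described in the report matters.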


Retail in the metaverse: considerations for brands

While the metaverse may be a golden ticket of new opportunities for retailers, there are very real security risks that also need to be considered. For example, in this new, yet to be explored metaverse, there’s the danger of consumers coming across a virtual man wearing a trench coat selling fake designer watches. How can consumers tell if he is selling the genuine article, or fakes? Ensuring your brand and products are protected in the metaverse is going to be a serious challenge. Monitoring for intellectual property (IP) infringement across the various components of the metaverse will not be easy. If consumers have bad experiences in the metaverse, such as buying fake goods, this can cause great harm to brands’ reputations online and in the real world too. This is why it is important that brands take steps to protect their IP as well as their customers. It has been suggested that operators within the metaverse should establish strategies for protecting users’ IP, similar to how YouTube, eBay and Amazon work to protect rights holders from illicit activity on their platforms.


Where have all the global network aggregators gone?

The best way to leverage SD-WAN’s potential for reducing network transport costs is to have a portfolio strategy with respect to selecting internet transport suppliers. Through robust acquisition processes and detailed coverage and cost modeling, enterprises need to come up with a manageable number of vetted suppliers that meet their global transport requirements. Potential suppliers should be evaluated based on provisioning, SLAs, operational and service support, and commercial and contractual terms, for example. Typically, for larger businesses, a portfolio approach may include one global or strong regional aggregator alongside two-to-five other telecom service providers with different provisioning models. Having a small handful of suppliers as go-to partners rather than a single supplier can yield significant cost savings – and those savings can run 30% higher compared to a single-source supplier approach. If managed correctly, a portfolio approach also offers a way to keep performance/pricing pressure on competing suppliers. 



Quote for the day:

"Don't focus so much on who is following you, that you forget to lead." -- E'yen A. Gardner

Daily Tech Digest - November 21, 2021

Ransomware Phishing Emails Sneak Through SEGs

The original email purported to need support for a “DWG following Supplies List,” which is supposedly hyperlinked to a Google Drive URL. The URL is actually an infection link, which downloaded an .MHT file. “.MHT file extensions are commonly used by web browsers as a webpage archive,” Cofense researchers explained. “After opening the file, the target is presented with a blurred-out and apparently stamped form, but the threat actor is using the .MHT file to reach out to the malware payload.” That payload comes in the form of a downloaded .RAR file, which in turn contains an .EXE file. “The executable is a DotNETLoader that uses VBS scripts to drop and run the MIRCOP ransomware in memory,” according to the analysis. ... “Its opening lure is business-themed, making use of a service – such as Google Drive – that enterprises employ for delivering files,” the researchers explained. “The rapid deployment from the MHT payload to final encryption shows that this group is not concerned with being sneaky. Since the delivery of this ransomware is so simple, it is especially worrying that this email found its way into the inbox of an environment using a SEG.”


How Decentralized Finance Will Impact Business Financial Services

In essence, DeFi aims to provide a worldwide, decentralized alternative to every financial service now available, such as insurance, savings, and loans. DeFi’s primary goal is to offer financial services to the world’s 1.7 billion unbanked individuals. And this is possible because DeFi is borderless. These financial services are available to anybody with a smartphone and internet connection in any part of the world. For the impoverished and unbanked, this will revolutionize banking. They can invest anywhere in the world in anything with just the touch of a button. By providing open access for all, DeFi empowers individuals and businesses to maintain greater control over their assets and gives them the financial freedom to select how to invest their money without relying on any intermediary. DeFi is also censorship-resistant, making it immune from government intervention. Furthermore, sending money across borders is extremely costly under the existing system. DeFi eliminates the need for costly intermediaries, allowing for better interest rates and lower expenses, while also democratizing banking systems.


Addressing the Low-Code Security Elephant in the Room

What are some development choices about the application layer that affect the security responsibility? If the low-code application is strictly made up of low-code platform native capabilities or services, you only have to worry about the basics. That includes application design and business logic flaws, securing your data in transit and at rest, security misconfigurations, authentication, authorizing and adhering to the principle of least-privilege, providing security training for your citizen developers, and maintaining a secure deployment environment. These are the same elements any developer — low-code or traditional — would need to think about in order to secure the application. Everything else is handled by the low-code platform itself. That is as basic as it gets. But what if you are making use of additional widgets, components, or connectors provided by the low-code platform? Those components — and the code used to build them — are definitely out of your jurisdiction of responsibility. You may need to consider how they are configured or used in your application, though.


Google Introduces ClusterFuzzLite Security Tool for CI/CD

ClusterFuzzLite enables you to run continuous fuzzing on your Continuous integration and delivery (CI/CD) pipeline. The result? You’ll find vulnerabilities more easily and faster than ever before. This is vital. A 2020 GitLab DevSecOps survey found that, while 81% of developers believed fuzz testing is important, only 36% were actually using fuzzing. Why? Because it was too much trouble to set fuzzing up and integrate it with their CI/CD systems. At the same time, though, as Shuah Khan, kernel maintainer and the Linux Foundation’s third Linux Fellow, has pointed out “It is easier to detect and fix problems during the development process,” than it is to wait for manual testing or quality assurance later in the game. By feeding unexpected or random data into a program, fuzzing catches bugs that would otherwise slip past the most careful eyeballs. NIST’s guidelines for software verification specify fuzzing as a minimum standard requirement for code verification. After all as Dan Lorenc, founder and CEO of Chainguard and former Google open source security team software engineer, recently told The New Stack, 
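The core idea of fuzzing fits in a few lines: generate random inputs, run the target, and record anything that fails in an unexpected way. This toy harness, with a deliberately buggy target function, only hints at what ClusterFuzzLite automates on top, such as coverage guidance, corpus management, and CI integration:

```python
import random

def parse_age(s):
    """Toy function under test: crashes on the empty string."""
    if s[0] == "-":               # IndexError when s == ""
        raise ValueError("age cannot be negative")
    return int(s)

def fuzz(target, trials=500, seed=1):
    """Feed short random strings to `target` and collect crashing inputs.

    A toy illustration of the idea behind fuzzers; real fuzzers mutate a
    corpus and use coverage feedback instead of blind random strings.
    """
    rng = random.Random(seed)
    alphabet = "0123456789-x"
    crashes = []
    for _ in range(trials):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randrange(0, 4)))
        try:
            target(s)
        except ValueError:
            pass                      # expected, documented failure mode
        except Exception as exc:      # anything else is a bug
            crashes.append((s, type(exc).__name__))
    return crashes

crashes = fuzz(parse_age)
print(f"found {len(crashes)} crashing inputs")
```

No careful reviewer is likely to try the empty string by hand, but a few hundred random inputs stumble into it almost immediately, which is exactly the class of bug fuzzing is good at surfacing.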


Bitcoin Is How We Really Build A New Financial System

When it comes to a foundational sound money, Bitcoin is unmatched. Compared to other blockchain assets, Bitcoin has had an immaculate conception. Also, Bitcoin has an elegantly simple monetary policy and an immutable supply freed from human discretion, something no other cryptocurrency asset can provide. Bitcoin's monetary policy is based on algorithmically determined parameters and is thus perfectly predictable, rule-based and neither event- nor emotion-driven. By depoliticizing monetary policy and entrusting money creation to the market according to rule-based parameters, Bitcoin behaves as neutrally as a monetary asset can. Bitcoin is truly sound money since it provides the highest degree of stability, reliability and security. Most crypto enthusiasts would probably object that while Bitcoin might be the soundest money, its technical capabilities do not allow for DeFi to be built on top of it. As a matter of fact, though, nothing could be further from the truth.
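That "perfectly predictable, rule-based" policy is simple enough to verify in a few lines. The consensus rules fix a 50 BTC initial block subsidy, denominated in satoshis (1 BTC = 100,000,000 sat), halved by integer division every 210,000 blocks:

```python
# Reproduce Bitcoin's issuance schedule from its consensus parameters.
HALVING_INTERVAL = 210_000        # blocks between halvings
subsidy = 50 * 100_000_000        # initial block subsidy in satoshis

total = 0
while subsidy > 0:
    total += subsidy * HALVING_INTERVAL  # coins minted during this era
    subsidy //= 2                        # subsidy halves (integer division)

print(total / 100_000_000)  # ~20,999,999.9769 BTC -- just under 21 million
```

The loop terminates on its own once integer division drives the subsidy to zero, which is why the supply cap emerges from the rules rather than from anyone's discretion.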


A Simple 5-Step Framework to Minimize the Risk of a Data Breach

The first step businesses need to take to increase the security of their customer data is to review what types of data they're collecting and why. Most companies that undertake this exercise end up surprised by what they find. That's because, over time, the volume and variety of customer information that gets collected tends to expand well beyond a business's original intent. For example, it's fairly standard to collect things like a customer's name and email address. And if that's all a business has on file, it won't be an attractive target to an attacker. But if the business runs a cloud call center or any type of high-touch sales cycle or customer support, it probably collects home addresses, financial data and demographic information as well; together, that assembles a collection that's perfect for enabling identity theft if the data ever gets out into the wild. So, when evaluating each collected data point to determine its value, businesses should ask themselves: what critical business function does this data facilitate? If the answer is none, they should purge the data and stop collecting it.
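The review step above amounts to an inventory: map each collected field to the business function it serves, then purge anything with no answer. A minimal sketch, with entirely hypothetical field names:

```python
# Hypothetical data inventory: each collected field -> the business
# function it facilitates (None means no current use was identified).
data_inventory = {
    "email":          "account login and receipts",
    "name":           "order fulfillment",
    "home_address":   "shipping",
    "date_of_birth":  None,  # collected "just in case" -- no current use
    "income_bracket": None,  # legacy marketing field, no longer used
}

keep  = {field for field, purpose in data_inventory.items() if purpose}
purge = set(data_inventory) - keep
print(sorted(purge))  # fields to delete and stop collecting
```

Trivial as the code is, forcing every field through the "what function does this facilitate?" question is the point of the exercise.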


To Monitor or Not to Monitor a Model — Is there a question?

Evidently AI works by analyzing the training and production datasets. It maps each feature in the training data to its counterpart in the production data. ... Thereafter, it runs different statistical tests depending on the input. Evidently AI then creates graphs based on the Plotly Python library, and you can read more about the code in their open-source GitHub repository. For binary categorical features, it performs a simple Z-test for a difference in proportions to verify whether there is a statistically significant difference in how often the training and production data take each of the two values of the binary variable. For categorical features with more than two values, it performs a chi-squared test, which assesses whether the distribution of the variable in the production data is likely given its distribution in the training data. Finally, for numeric features, it performs a two-sample Kolmogorov-Smirnov goodness-of-fit test, which compares the distributions of the feature in the training and production data to see whether they are likely to be the same distribution or vary from each other significantly.
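The statistics behind two of those checks can be reproduced in a few lines with SciPy. This is a sketch of the tests themselves, not Evidently AI's actual implementation:

```python
import numpy as np
from scipy import stats

def binary_drift_pvalue(train, prod):
    """Two-proportion Z-test: has the share of 1s shifted between datasets?"""
    train, prod = np.asarray(train, float), np.asarray(prod, float)
    n1, n2 = len(train), len(prod)
    pooled = (train.sum() + prod.sum()) / (n1 + n2)
    se = np.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0  # no variation at all -> no evidence of drift
    z = (train.mean() - prod.mean()) / se
    return 2 * stats.norm.sf(abs(z))  # two-sided p-value

def numeric_drift_pvalue(train, prod):
    """Two-sample Kolmogorov-Smirnov test on a numeric feature."""
    return stats.ks_2samp(train, prod).pvalue

rng = np.random.default_rng(0)
reference = rng.normal(0, 1, 1000)                              # "training" data
same  = numeric_drift_pvalue(reference, rng.normal(0, 1, 1000))  # high p: same distribution
drift = numeric_drift_pvalue(reference, rng.normal(3, 1, 1000))  # tiny p: drift detected
print(same, drift)
```

A small p-value signals that the production data no longer looks like the training data for that feature, which is exactly the condition a monitoring dashboard should flag.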


IBM’s latest quantum chip breaks the elusive 100-qubit barrier

The Eagle is a quantum processor that is around the size of a quarter. Unlike regular computer chips, which encode information as 0 or 1 bits, quantum computers can represent information in something called qubits, which can have a value of 0, 1, or both at the same time due to a unique property called superposition. By holding over 100 qubits in a single chip, IBM says that Eagle could increase the “memory space required to execute algorithms,” which would in theory help quantum computers take on more complex problems. “People have been excited about the prospects of quantum computers for many decades because we have understood that there are algorithms or procedures you can run on these machines that you can’t run on conventional or classical computers,” says David Gosset, an associate professor at the University of Waterloo’s Institute for Quantum Computing who works on research with IBM, “which can accelerate the solution of certain, specific problems.”
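The "0, 1, or both at the same time" description can be made concrete with a two-entry state vector. A minimal NumPy sketch (not IBM's Qiskit API): applying a Hadamard gate puts the qubit that starts in |0⟩ into an equal superposition of |0⟩ and |1⟩.

```python
import numpy as np

# A single qubit is a 2-entry complex vector; |0> and |1> are the basis states.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate rotates |0> into an equal superposition of |0> and |1>.
H = np.array([[1,  1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0
probs = np.abs(state) ** 2  # Born rule: probabilities of measuring 0 or 1
print(probs)  # [0.5 0.5] -- "both at once" until the qubit is measured
```

Simulating n qubits this way needs a vector of 2**n amplitudes, which is precisely why classical simulation becomes infeasible as chips like Eagle push past 100 qubits.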


Industrial computer vision is getting ready for growth

Industrial applications, however, present some unique challenges for computer vision systems. Many organizations can't use pretrained machine learning models that have been tuned to publicly available data; they need models trained on their own specific data. Sometimes those organizations don't have enough data to train their ML models from scratch, so they need to go through more complicated processes, such as pretraining the model on a general dataset and then fine-tuning it on their own labeled images. The challenges of industrial computer vision are not limited to data. Sometimes sensitivities such as safety or transparency impose special requirements on the type of algorithm and the accuracy metrics used in industrial computer vision systems. And the team running the model needs an entire MLOps stack to monitor model performance, iterate across models, maintain different versions of the models, and manage a pipeline for gathering new data and retraining the models.
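The pretrain-then-fine-tune pattern boils down to freezing a feature extractor learned on general data and training only a small head on the organization's labeled examples. A minimal NumPy sketch under stated assumptions: the "pretrained backbone" is a fixed random projection standing in for a real network, and the dataset is synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for a frozen, pretrained backbone: a fixed feature extractor.
W_backbone = rng.normal(size=(64, 16))
def backbone(x):
    return np.tanh(x @ W_backbone)  # weights stay frozen during fine-tuning

# Small labeled "industrial" dataset (synthetic for this sketch).
X = rng.normal(size=(200, 64))
true_w = rng.normal(size=16)
y = (backbone(X) @ true_w > 0).astype(float)

# Fine-tune only a logistic-regression head on top of the frozen features.
feats = backbone(X)
w = np.zeros(16)
for _ in range(2000):
    z = np.clip(feats @ w, -30, 30)          # logits, clipped for stability
    p = 1.0 / (1.0 + np.exp(-z))             # predicted probabilities
    w -= 0.5 * feats.T @ (p - y) / len(y)    # gradient step on the head only

accuracy = np.mean((feats @ w > 0) == y)
print(accuracy)
```

Because only the 16-parameter head is trained, a few hundred labeled images can suffice, which is the whole appeal when an organization's proprietary data is scarce.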


Three Big Myths About Decentralized Finance

Because the blockchain uses so many distinct sources to verify and record what happens within the system, there is also a common misconception that decentralized finance is inherently safer than centralized systems run by a single financial institution. After all, if thousands of sources check my transactions, won't they be able to identify and prevent anyone trying to use my account without my permission? Not necessarily. While it's true that the blockchain does help to safeguard against administrative or accounting errors — as happened recently with one family who mistakenly received $50 billion in their account — it also removes the safeguards that centralized financial businesses provide. Most of today's largest financial institutions have been around for decades. Over the years, federal and industry regulations have been put in place to provide safeguards against fraud. Navigating these safeguards can no doubt be tiresome, but they do provide valuable protections.




Quote for the day:

"A leadership disposition guides you to take the path of most resistance and turn it into the path of least resistance." -- Dov Seidman