Daily Tech Digest - March 12, 2022

The Similarities and Differences between ITIL 4 and VeriSM

Even though ITIL has been around for many years and is considered the de facto best-practice framework for IT service management (ITSM), VeriSM emerged in 2018 to find its place in the market. This came before the launch of ITIL 4 from AXELOS in February 2019. VeriSM’s publication introduced modern approaches to service management such as Agile and shift-left, among others, and ITIL 4, once released, also incorporated these modern concepts that have conquered the IT world over the last few years. VeriSM claims not to be a body of service management best practice but an approach whose key facet (it’s not a process flow, nor a set of procedures) is the Management Mesh, which encompasses the popular management practices (ITIL, COBIT, ISO/IEC 20000, CMMI-SVC, DevOps, Agile, Lean, SIAM, etc.) as well as emerging technologies and trends (artificial intelligence (AI), containerization, the Internet of Things (IoT), big data, cloud, shift-left, continuous delivery, CX/UX, etc.). Maybe there’s some truth in this statement.


Solo.io Intros Gloo Mesh Enterprise 2.0

Introduced last year, Gloo Mesh Enterprise is an Istio-based, Kubernetes-native solution for multicluster and multimesh service mesh management. New features in 2.0 such as multitenant workspaces let users set fine-grained, role-based access control and editing permissions for shared infrastructure, enabling teams to collaborate in large environments. Users can manage traffic, establish workspace dependencies, define cluster namespaces, and control destinations directly in the UI, and policies can be reused and adapted using labels. Gloo Mesh Enterprise 2.0 also features a new Gloo Mesh API for Istio management that enables developers to configure rules and policies for both north-south and east-west traffic from a single, unified API. The new API also simplifies the process of expanding from a single cluster to dozens or hundreds of clusters. And the new Gloo Mesh UI for observability provides service topology graphs that highlight network traffic, latency, and speeds while automatically saving the new state when you move clusters or nodes.


Introducing Community Security Analytics

You can use CSA to further investigate high-fidelity security findings from Security Command Center (SCC) and correlate them with logs for decision-making. For example, you may use a CSA query to list the admin activity performed by a newly created service account key flagged by Security Command Center, in order to validate any malicious activity. It’s important to note that the detection queries provided by CSA are self-managed, and you may need to tune them to minimize alert noise. If you’re looking for managed and advanced detections, take a look at SCC Premium’s growing threat detection suite, which provides regularly updated managed detectors designed to identify threats within your systems in near real time. CSA is not meant to be a comprehensive, managed set of threat detections, but a collection of community-contributed sample analytics that give examples of essential detective controls based on cloud techniques. Use CSA in conjunction with our threat detection and response capabilities and with our threat prevention capabilities.
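CSA’s detections are distributed as queries that run over exported cloud logs. Purely as an illustration of the detective control described above, here is a minimal Python sketch, with entirely hypothetical record and field names, of the same logic: flag admin activity performed with a service account key created only recently.

```python
from datetime import datetime, timedelta

# Hypothetical, simplified log records. Real CSA detections are queries
# over Cloud Logging exports, with different schemas and field names.
LOGS = [
    {"principal": "svc-deploy@proj.iam", "method": "SetIamPolicy",
     "key_created": datetime(2022, 3, 10), "admin_activity": True},
    {"principal": "alice@example.com", "method": "storage.objects.get",
     "key_created": None, "admin_activity": False},
]

def flag_new_key_admin_activity(logs, now, max_key_age_days=7):
    """Return admin-activity entries performed with a recently created key."""
    cutoff = now - timedelta(days=max_key_age_days)
    return [
        entry for entry in logs
        if entry["admin_activity"]
        and entry["key_created"] is not None
        and entry["key_created"] >= cutoff
    ]

hits = flag_new_key_admin_activity(LOGS, now=datetime(2022, 3, 12))
```

In this toy run, only the service account entry is flagged; as the article notes, the age threshold is exactly the kind of knob you would tune to manage alert noise.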


µTransfer: A technique for hyperparameter tuning of enormous neural networks

Our theory of scaling enables a procedure to transfer training hyperparameters across model sizes. If, as discussed above, µP networks of different widths share similar training dynamics, they likely also share similar optimal hyperparameters. Consequently, we can simply apply the optimal hyperparameters of a small model directly onto a scaled-up version. We call this practical procedure µTransfer. If our hypothesis is correct, the training loss-hyperparameter curves for µP models of different widths would share a similar minimum. Conversely, our reasoning suggests that no scaling rule of initialization and learning rate other than µP can achieve the same result. This is supported by the animation below. Here, we vary the parameterization by interpolating the initialization scaling and the learning rate scaling between the PyTorch default and µP. As shown, µP is the only parameterization that preserves the optimal learning rate across widths, achieves the best performance for the model with width 2¹³ = 8192, and is the only one for which wider models always do better at a given learning rate—that is, graphically, the curves don’t intersect.
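As a rough sketch of what µTransfer means in practice: under µP, the Adam learning rate for hidden-layer weights scales roughly like 1/width, so an optimum found cheaply at a small width can be reused at a large one. The function below is a simplification under that assumption (real µP also rescales initialization variances and per-layer output multipliers, and the exact rule depends on layer type and optimizer).

```python
def mu_transfer_lr(base_lr, base_width, target_width):
    """Transfer a learning rate tuned at base_width to target_width,
    assuming the muP rule that hidden-layer LR scales like 1/width."""
    return base_lr * base_width / target_width

# Tune cheaply at width 256, then transfer to width 2**13 = 8192:
tuned_lr_small = 1e-2
lr_large = mu_transfer_lr(tuned_lr_small, base_width=256, target_width=8192)
```

The point of the procedure is that the expensive sweep happens only once, at the small width; the large model inherits the result rather than being tuned directly.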


Will Transformers Take Over Artificial Intelligence?

Transformers quickly became the front-runner for applications like word recognition that focus on analyzing and predicting text. They led to a wave of tools, like OpenAI’s Generative Pre-trained Transformer 3 (GPT-3), which trains on hundreds of billions of words and generates consistent new text to an unsettling degree. The success of transformers prompted the AI crowd to ask what else they could do. The answer is unfolding now, as researchers report that transformers are proving surprisingly versatile. In some vision tasks, like image classification, neural nets that use transformers have become faster and more accurate than those that don’t. Emerging work in other AI areas — like processing multiple kinds of input at once, or planning tasks — suggests transformers can handle even more. “Transformers seem to really be quite transformational across many problems in machine learning, including computer vision,” said Vladimir Haltakov, who works on computer vision related to self-driving cars at BMW in Munich. Just 10 years ago, disparate subfields of AI had little to say to each other. But the arrival of transformers suggests the possibility of a convergence.


The Questionable Ethics Of Bitcoin ESG Junk Science

In February 2022, an op-ed, titled “Revisiting Bitcoin’s Carbon Footprint,” was published in the scientific journal “Joule,” authored by four researchers: Alex de Vries, Ulrich Gallersdörfer, Lena Klaaßen and Christian Stoll. Their written commentary, which admits limitations in their estimates, states that as bitcoin miners migrated from China to Kazakhstan and the United States in 2021, the network’s carbon footprint increased to 0.19% of global emissions. What went unnoticed by the media was that the researchers have professional motives to overstate Bitcoin’s relatively tiny environmental impact. The op-ed’s lead author, Alex de Vries, failed to disclose that he is employed by De Nederlandsche Bank (DNB), the Dutch central bank. Central banks are no fans of open, global payment rails, which bypass monopolistic government settlement layers. De Vries first released his “Bitcoin Energy Consumption Index” in November 2016, which coincides with his first round of employment with DNB, giving the appearance that DNB encouraged his critique of Bitcoin’s energy consumption. 


DBaaS and the Enterprise

From a DBA perspective (and being a former DBA myself), I always enjoyed working on more challenging issues. Mundane operations like launching servers and setting up backups make for a less-than-exciting daily work experience, yet when managing large fleets, these operations make up the majority of the work. As applications grow more complex and data sets grow rapidly, it is much more interesting to work with the application teams to design and optimize the data tier. Query tuning, schema design, and workflow analysis are much more interesting (and often more beneficial) than the basic setup. DBAs are often skilled at quickly identifying issues and spotting design flaws before they become problems. When an enterprise adopts a DBaaS model, this can free up the DBAs to work on more complex problems. They are also able to better engage with and understand the applications they are supporting. A common comment I get when discussing complex tickets with clients is: “Well, I have no idea what the application is doing, but we have an issue with XYZ.”


How to Develop Strategies that Close the Leadership Gap with the Generation Gap

The leadership gap that has been forecasted for the past several years is upon us. And, it could not have come at a worse time with the Covid-19 pandemic still underway, impacting each of the multiple generations in the workforce differently. Many companies are unable to keep pace with their need to fill leadership openings created by Baby Boomers taking retirement and by companies expanding, in some cases at rapid rates. Their pipelines are not sufficient to fill the increasing number of leadership openings promptly. Companies that lack a focused strategy and drive to close this gap might very well find themselves struggling to stay in business and maintain their market share. The significant numbers of Baby Boomers taking retirement for the past ten years have only exacerbated the leadership gap. Many of them are leaving their leadership roles for their well-earned leisure lifestyle. In the third quarter of 2020, the number of Boomers who retired increased by over three million from the same quarter in 2019. 


How Digital Transformation is Rebuilding the Construction Industry

As construction companies continue to comply with pandemic restrictions, technology has been essential to the implementation of health and safety measures. For instance, firms can use wearables and AI sensors to detect when workers are not maintaining proper physical distance. Some construction projects are even using contact tracing devices that alert employees when there are too many personnel at a worksite; these can identify potentially infected individuals in the event of a confirmed COVID-19 case. These measures not only prioritize employee safety, but also help companies avoid entire site shutdowns. Even remotely, technology is a vital asset to construction firms. With fewer personnel allowed on-site, companies can rely on new cloud-based video platforms to assist with site monitoring. In the city of Miami, virtual inspections of construction sites through either a Zoom or a Microsoft Teams video call are now routine between engineers on site and building control officials. With usage tripling in 2020 alone, drones are also being used more frequently to improve mapping and surveying processes.


It’s not a Great Resignation–it’s a Great Rethink

Leaders often regard purpose in a limited way as either a marketing or human resources exercise. Companies that go deepest with purpose take a much more comprehensive approach, treating purpose as an operating system and embedding it in processes, organizational structures, and culture. Global professional services firm EY adopted a system of metrics to spur behaviors associated with its purpose. “Companies really have to be able to show what they’re doing,” EY’s CEO Carmine Di Sibio told me. “They get into trouble when they talk a lot about purpose and it’s just talk.” Imagine what it feels like when everything about your work ties back in clear, even obvious ways to your purpose. That’s what employees at deep-purpose companies experience on the job. It’s encouraging that some CEOs—68% of those queried in one survey—are placing “more emphasis” on purpose, but that’s not enough. For purpose to feel genuine and meaningful, they must live it in their daily work, hold others accountable for acting in ways congruent with that purpose, and bring it alive for their workforce.



Quote for the day:

"The essence of leadership is the willingness to make the tough decisions. Be prepared to be lonely." -- Colin Powell

Daily Tech Digest - March 11, 2022

The secrets of successful cloud-first strategies

“Cloud-native is much more than just technology,” Rubina says. Companies need to make a fundamental shift in mindset away from traditional waterfall development toward more agile development principles such as the DevOps model and automation. “Cloud-native must be a strategic approach; it must be driven by top management as it is a response to a wide range of business needs,” Rubina says. “And these need to be well defined and rolled out by senior management. It is about changes in the business model, about entering new markets, about the ability to adapt quickly to create innovative products and services and drastically reduce time to market.” ... “Determine if you’ll be using a fixed-cost structure or one that is flexible for the cloud,” Hon says. “Are you leveraging showback or chargeback to the business? And keep in mind seasonality. You want to have an idea of how often you scale up and shrink down and what that looks like. Building out cost models is key for how you can build a budget.” With a traditional data center, companies buy and install hardware with workload peaks in mind, Hon says.
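Hon’s point about cost models and seasonality can be made concrete with a toy sketch. The demand figures, unit prices, and cloud price premium below are all made up; the comparison is between provisioning for the annual peak (the traditional data-center model) and paying only for what each month actually uses.

```python
# Hypothetical monthly demand in "server-equivalents" with a summer peak.
monthly_demand = [40, 42, 45, 50, 55, 80, 95, 90, 60, 50, 45, 70]

def fixed_cost(demand, unit_cost):
    """Traditional data center: buy for the peak and pay for it all year."""
    return max(demand) * unit_cost * len(demand)

def elastic_cost(demand, unit_cost, premium=1.3):
    """Cloud: pay per unit actually used, at a (assumed) price premium."""
    return sum(demand) * unit_cost * premium

fixed = fixed_cost(monthly_demand, unit_cost=100)
elastic = elastic_cost(monthly_demand, unit_cost=100)
```

With this seasonality, elasticity wins despite the per-unit premium; with flat demand the comparison can easily flip, which is why building your own cost model matters.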


Moving a Legacy Data Warehouse to the Cloud? SAP’s Answer

As currently designed, SAP BW Bridge is primarily intended as a data acquisition and staging layer — which has been the mainstay for BW. Other capabilities, such as planning, are outside the scope of BW Bridge; instead, SAP directs BW customers to SAP Analytics Cloud (SAC) Planning, which for now is a separate offering; later this year, SAP will add support for SAC planning on top of DWC-based data models. The scenarios for SAP BW Bridge are, not surprisingly, centered around bringing SAP legacy data sources into the modern cloud data warehousing world. These could include customers that want to replicate their existing BW environment and operational reports, but in a managed cloud environment. And of course, they could involve migrating more modern BW/4HANA on-premises deployments as well. And, as noted above, with cross-space sharing, BW data could be shared with other greenfield analytics, and mixed with data from other sources, developed in DWC. SAP BW Bridge is an acknowledgment that, for classic or legacy systems to make it into the cloud, cloud SaaS providers have to provide more flexibility to accommodate the types of customizations that permeate legacy systems.


Database technology evolves to combine machine learning and data storage

The new model offers not just the potential for tapping the power of AI algorithms, but also a more flexible search engine that isn’t locked into searching for exact matches. While traditional databases require the names to be spelled correctly or the exact confirmation code to locate a record, Weaviate can find the entries that are most similar. What does it mean to be similar? That’s still a wide-open question for many users. Much of the art goes into defining how to calculate just how close or far apart two pieces of data might be. Finding the closest records in the database begins with finding a metric, a way to specify just what it means to be nearby in some multidimensional space defined by an AI. While SeMI Technologies is the main fundraiser, much of the branding is focused on Weaviate, the open source database. Companies can download the code or purchase Weaviate as a managed service. Many Weaviate users rely on pre-built models for text in English and other well-known languages. SeMI built one model out of the entire collection of Wikipedia articles so that people could experiment.
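The “nearby in some multidimensional space” idea can be sketched in a few lines. This is not Weaviate’s implementation, just the underlying notion: records are embedded as vectors, and a query returns the record whose vector has the highest cosine similarity, so a misspelled name can still match.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query_vector, records):
    """Return the stored record whose vector is closest to the query."""
    return max(records, key=lambda r: cosine_similarity(query_vector, r["vector"]))

# Made-up three-dimensional embeddings; real models use hundreds of dims.
records = [
    {"name": "Jon Smith", "vector": [0.9, 0.1, 0.3]},
    {"name": "Jane Doe",  "vector": [0.1, 0.8, 0.2]},
]
match = nearest([0.85, 0.15, 0.25], records)
```

Defining the embedding that produces those vectors is exactly the “art” the article refers to; the distance computation itself is the easy part.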


Using the Problem Reframing Method to Build Innovative Solutions

First, reframing is not analysis. It is not about performing root cause analysis and asking “Why does this problem exist?” Reframing starts before that, when you ask “What problem are we trying to solve?” and “Is this the right problem to solve?” Reframing is about looking at the big picture and thinking of the problem from different angles. Second, reframing is not about finding the real problem but finding a better problem to solve. The advantages of reframing a problem are generating more options, opening up the problem space (diverging) and, in the end, building better solutions by solving a better problem. Let’s look at a simple example that Thomas Wedell-Wedellsborg explained in his book “What’s Your Problem?” Imagine you are the owner of an office building, and your tenants are complaining about the elevator. It’s too slow, and they have to wait a lot. Several tenants are threatening to break their leases if you don’t fix the problem. When asked, most people directly jump into thinking of solutions: install a new lift, upgrade the motor, or perhaps improve the algorithm that runs the lift.


Code Verify: An open source browser extension for verifying code authenticity on the web

Code Verify expands on the concept of subresource integrity, a security feature that lets web browsers verify that the resources they fetch haven’t been manipulated. Subresource integrity applies only to single files, but Code Verify checks the resources on the entire webpage. To do this at scale, and to enhance trust in the process, Code Verify partners with Cloudflare to act as a trusted third party. We’ve given Cloudflare a cryptographic hash source of truth for WhatsApp Web’s JavaScript code. When someone uses Code Verify, the extension automatically compares the code that runs on WhatsApp Web against the version of the code verified by WhatsApp and published on Cloudflare. If there are any inconsistencies, Code Verify will notify the user. While comparing hashes to detect files that have been tampered with is not new, Code Verify does so automatically, with the help of Cloudflare’s third-party verification, and at this scale for the first time. WhatsApp’s security protections, the Code Verify extension, and Cloudflare all work together to provide real-time code verification. 
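The core idea, hashing every fetched resource and comparing the result against a published source of truth, can be sketched as follows. This is an illustration of the concept, not Code Verify’s actual implementation; resource names and contents are made up.

```python
import hashlib

def page_fingerprint(resources):
    """Hash each resource, then hash the sorted per-file digests so the
    whole page gets a single, order-independent fingerprint."""
    digests = sorted(hashlib.sha256(body).hexdigest()
                     for body in resources.values())
    return hashlib.sha256("".join(digests).encode()).hexdigest()

# The "source of truth" the site publishes to the trusted third party:
published = {"app.js": b"console.log('hi')", "vendor.js": b"var x = 1;"}

# What the extension actually fetched in the browser:
fetched_ok = dict(published)
tampered = {"app.js": b"console.log('hi'); steal()", "vendor.js": b"var x = 1;"}

clean = page_fingerprint(fetched_ok) == page_fingerprint(published)
flagged = page_fingerprint(tampered) != page_fingerprint(published)
```

This is also why the article contrasts Code Verify with subresource integrity: SRI pins one file’s hash in its script tag, whereas here the comparison covers every resource on the page against an externally hosted truth.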


Chainlink for Enterprises: The Gateway to All Blockchains

Chainlink is secure, open-source blockchain middleware (referred to as an “oracle”) that provides smart contracts with any type of data or computation that they cannot inherently obtain on their native blockchain due to technical, financial, governance, or legal constraints. Unlike blockchains, which maintain internal consistency around transaction validation, Chainlink aims to generate and deliver oracle reports to blockchains that accurately reflect the state of external events and computation. Chainlink oracles are able to generate oracle reports because the Chainlink oracle node software can read data from and write data to blockchains and APIs and perform off-chain computation. Chainlink generates trust-minimization for oracle reports through mechanisms similar to those used by blockchains, such as decentralized validation, cryptographic signatures, and financial/reputational incentives outlined in service level agreements (SLAs). 
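As a simplified illustration of how a decentralized oracle network can produce a trust-minimized report: aggregating independent node observations with a median bounds the influence of any single faulty or malicious node. This sketch is an assumption about the general mechanism, not Chainlink’s code; real networks also verify each node’s cryptographic signature and enforce the SLA incentives mentioned above.

```python
import statistics

def aggregate_report(node_reports, min_responses=3):
    """Combine independent oracle node observations into one report.
    The median ignores outliers, so a minority of bad nodes cannot
    move the value delivered on-chain."""
    if len(node_reports) < min_responses:
        raise ValueError("not enough oracle responses to form a report")
    return statistics.median(node_reports)

# Five hypothetical nodes report an asset price; one is wildly wrong.
price = aggregate_report([42.0, 41.9, 42.1, 42.0, 9999.0])
```

The same decentralized-validation principle that secures block production is applied here to data delivery, which is the parallel the paragraph draws.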


A decade of IoT: 10 years forward and 10 years back

As with many markets, the winners are often those that have specialisms, such as CSL’s critical connectivity or Peoplesafe’s personal safety solutions, or those that have scale, such as Wireless Logic. Businesses can achieve success in different ways, but ultimately they need to provide a solution that the market needs, along with executing their vision efficiently on the back of strong market tailwinds. However, with the half-life of business models shortening, especially in fast-growth technology, great management teams must continue to evolve to ensure they maintain their competitive advantage. This evolution could mean adding new services, more security, or providing an improved service wrapper for the customer. Moreover, another key differentiator might be the route to market. For example, ECI’s former investment Arkessa understood there was a significant opportunity to introduce IoT connectivity at the point of manufacture rather than in the aftermarket. There has already been some consolidation of the market as companies aim to dominate certain verticals within the IoT market or to scale up.


Low-Code Tools Optimize Engineering Time for Internal Applications

A large part of application development in today’s world involves a lot of application management. Low-code tools are getting widespread adoption because, in a world where front-end applications are becoming more complex, they make things like hosting, deployment, authentication, and workflow functions much simpler. Compared to custom code, where you have to spin up a server and set up CI/CD pipelines to deploy to your cluster, low-code takes care of this with the click of a button, saving developers time because they don’t have to configure and maintain a custom deployment. Similarly, authentication and authorization are complex to get right because there are different access controls to think through, invite flows to address, and onboarding and offboarding of team members to take care of. Low-code products do the heavy lifting and have these features built in, allowing developers to define authentication and authorization in a much simpler manner.
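The built-in access controls described above usually amount to role-based checks of the following shape. This is a hedged sketch; the role and permission names are hypothetical, and real platforms layer invite flows and audit logging on top.

```python
# Hypothetical role table of the kind a low-code platform manages for you.
ROLE_PERMISSIONS = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin":  {"read", "write", "invite", "offboard"},
}

def is_allowed(user_roles, action):
    """True if any of the user's roles grants the requested action."""
    return any(action in ROLE_PERMISSIONS.get(role, set())
               for role in user_roles)

can_write = is_allowed(["editor"], "write")          # granted
can_invite = is_allowed(["viewer"], "invite")        # denied
```

Writing, testing, and maintaining even this small table correctly across invite, onboarding, and offboarding flows is the work the platform is absorbing.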


IT leadership: 5 ways you could be wasting your team's talent

Today’s IT specialists no longer need a four-year degree in computer science or engineering to be able to do their job. Between the many boot camps and online training resources, it’s entirely possible for new developers to train themselves and gain the skills necessary to provide value to an IT organization. Limiting your talent search to those with outdated credentials could cause you to miss out on golden opportunities to build out your team. Instead, focus on skills using unbiased assessments that can determine proficiency without knowing a candidate’s educational background or past experience. Maximizing the potential of your IT employees provides a dual benefit: not only does it help your enterprise to avoid the crunch of a talent shortage, but it also ensures that your employees will be more satisfied with their jobs and career advancement. Tech employees highly value logic and efficiency, and by demonstrating that your organization makes the most out of its teams and technologies, you’ll gain another key selling point as you compete for new talent in a highly contested job market.


Cloud security is too important to leave to cloud providers

The need to take control of security and not turn ultimate responsibility over to cloud providers is taking hold among many enterprises, an industry survey suggests. The Cloud Security Alliance, which released its survey of 241 industry experts, identified an "Egregious 11" cloud security issues. The survey's authors point out that many of this year's most pressing issues put the onus of security on end user companies, versus relying on service providers. "We noticed a drop in ranking of traditional cloud security issues under the responsibility of cloud service providers. Concerns such as denial of service, shared technology vulnerabilities, and CSP data loss and system vulnerabilities -- which all featured in the previous 'Treacherous 12' -- were now rated so low they have been excluded in this report. These omissions suggest that traditional security issues under the responsibility of the CSP seem to be less of a concern. Instead, we're seeing more of a need to address security issues that are situated higher up the technology stack that are the result of senior management decisions."



Quote for the day:

"A lot of people have gone farther than they thought they could because someone else thought they could." -- Zig Ziglar

Daily Tech Digest - March 10, 2022

Sharp rise in SMB cyberattacks by Russia and China

Over the last several weeks, there has been a sharp rise in activity from two countries with consistently high levels of both attempted and successful attacks originating within their borders — Russia and China. The vast volume of data analyzed suggests these countries may even be coordinating attack efforts. Per the available analysis, the attack trend lines for Russia and China show almost exactly the same pattern. A chart from Germany, juxtaposed against them, is not even close to the same pattern, leading to educated speculation that the two countries could be coordinating efforts. According to the Brookings Institute, “The U.S. National Security Strategy declares Russia and China the two top threats to U.S. national security. At the best of times, U.S.-Russia ties are a mixture of cooperation and competition, but today they are largely adversarial… Russia’s increasingly close relationship with China represents an ongoing challenge for the United States. While there is little that Washington can do to draw Moscow away from Beijing, it should not pursue policies that drive the two countries closer together, such as the trade war with China and rafts of sanctions against Russia.”
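The claim that two trend lines show “almost the exact same pattern” is the kind of observation a correlation coefficient makes precise. With made-up weekly attack counts standing in for the real data, a Pearson correlation near 1.0 for the Russia and China series, against a much weaker one for Germany, is the shape of evidence the article describes.

```python
# Hypothetical weekly attack counts (illustrative only, not the real data).
russia  = [120, 150, 180, 160, 210, 260, 240]
china   = [115, 148, 176, 158, 205, 255, 238]
germany = [90, 85, 95, 88, 92, 87, 91]

def pearson(xs, ys):
    """Pearson correlation: +1.0 means the two series move in lockstep."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

r_ru_cn = pearson(russia, china)
r_ru_de = pearson(russia, germany)
```

Correlation alone cannot prove coordination, of course, which is why the article hedges with “educated speculation.”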


Threat intelligence: why it matters, and what best practice looks like

While no two organisations are the same, one useful way to think about deploying threat intelligence is to focus on three stages: monitoring, integration and analysis. In the early days of a threat intelligence strategy, it’s unlikely that you’ll have the relevant expertise, time, or resources necessary to support proactive intelligence analysis. However, by collecting information from various sources and monitoring them for threat indicators relevant to your business, it’s possible to drive significant value. This could include watching for leaked corporate credentials, mentions of your product on the dark web, or typosquats of your corporate brands in domain name registrations. The intelligence gained from doing so could inform the IT department about needed password resets and phishing email campaigns targeting employees, and accelerate efforts to verify potential security incidents. Next comes integration.
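Typosquat hunting of the kind mentioned above often starts with an edit-distance check against newly registered domains. A minimal sketch (the domain names are made up; production tooling also handles homoglyphs, added hyphens, and TLD swaps):

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum single-character edits between strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def likely_typosquats(brand, registered_domains, max_distance=1):
    """Flag newly registered domains within one edit of the brand name."""
    return [d for d in registered_domains
            if 0 < edit_distance(brand, d) <= max_distance]

hits = likely_typosquats("example.com",
                         ["examp1e.com", "exampleshop.com", "example.com"])
```

Here only `examp1e.com` (letter l swapped for the digit 1) is flagged; the exact match and the unrelated longer name are ignored.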


A Proposal For Type Syntax in JavaScript

When we’ve been asked "when are types coming to JavaScript?", we’ve had to hesitate to answer. Historically, the problem was that if you asked developers what they had in mind for types in JavaScript, you’d get many different answers. Some felt that types should be totally ignored, while others felt like they should have some meaning – possibly that they should enforce some sort of runtime validation, or that they should be introspectable, or that they should act as hints to the engine for optimization, and more! But in the last few years we’ve seen people converge more towards a design that works well with the direction TypeScript has moved towards – that types are totally ignored and erasable syntax at runtime. This convergence, alongside the broad use of TypeScript, made us feel more confident when several JavaScript and TypeScript developers outside of our core team approached us once more about a proposal called "types as comments". The idea of this proposal is that JavaScript could carve out a set of syntax for types that engines would entirely ignore, but which tools like TypeScript, Flow, and others could use.
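Python offers a handy analogy for this design: its type annotations are likewise read by tools but entirely ignored by the interpreter at runtime, which is essentially the behavior the “types as comments” proposal would bring to JavaScript. A small demonstration (the mismatched call is deliberate):

```python
def add(a: int, b: int) -> int:
    """The annotations are visible to tools via __annotations__,
    but nothing enforces them when the function runs."""
    return a + b

# A static checker such as mypy would flag this call, yet the
# interpreter ignores the annotations and happily concatenates:
result = add("type", "s")
```

Under the JavaScript proposal, engines would treat the carved-out type syntax the same way: erasable at runtime, meaningful only to TypeScript, Flow, and similar tools.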


Have smart wearables increased productivity of employees in the hybrid working environment?

Smartwatches offer myriad features that help individuals take charge of their daily tasks and complete them more quickly and with ease. From using voice commands to dictate emails or send short messages, to tracking physical movement, water intake, SpO2, heart rate, stress, breathing exercises, stretching, and more, these devices have enabled us to tirelessly complete tasks without compromising on fitness and health. SpO2 has emerged as an important measure of fitness over the last two years, and it is reassuring to keep a check on it from time to time in case any medical assistance is required. Earbuds, on the other hand, let you answer calls hands-free, which makes it easier to take notes or carry on with other tasks, thereby boosting productivity. Features like ANC and ENC take care of background noise to further enhance the quality of the audio experience. And in case you’re out running an errand during office hours and forget a crucial meeting that was scheduled, your smartwatch will notify you, and you can pick up the call via your earbuds while you drive back home.


Best Practices for Running Stateful Applications on Kubernetes

A common approach is to run your stateful application in a VM or bare metal machine, and have resources in your Kubernetes cluster communicate with it. The stateful application becomes an external integration from the perspective of pods in your cluster. The upside of this approach is that it allows you to run existing stateful applications as is, with no refactoring or re-architecture. If the application is able to scale up to meet the workloads required by the Kubernetes cluster, you do not need Kubernetes’ fancy auto scaling and provisioning mechanisms. The downside is that by maintaining a non-Kubernetes resource outside your cluster, you need your own way of monitoring processes, performing configuration management, and handling load balancing and service discovery for that application. ... A second, equally common approach is to run stateful applications as a managed cloud service. For example, if you need to run a SQL database with a containerized application, and you are running in AWS, you can use Amazon’s Relational Database Service (RDS).


3 DevSecOps Practices to Minimize Impact of the Next Log4Shell

Security is tough to get right, and it’s made more difficult by market pressures, cloud complexity and the growing prevalence of open source libraries. This has expanded the typical enterprise’s cyberattack surface to many times its size of several years ago. It has also provided more opportunities for potentially critical vulnerabilities to enter the development cycle and then persist into production. Log4Shell is the poster child for that problem. As a result, it’s more important than ever that we pay more than lip service to the concept of security as a shared responsibility within the organization. “Shared responsibility” is often used to mean greater boardroom buy-in, or in the context of behavioral change among staff, but it’s just as important in IT departments. We need developers to become more skilled in building secure products, but we also need to ensure apps in production continue running securely. Breaking down the silos between developers, operations and security teams will drive true DevSecOps practices. To get there, organizations should unify teams around a centralized platform that gives them visibility and control.


Forrester predicts RPA software market growth will begin to flatten next year

Forrester is predicting that some of the money going to RPA software today will begin to shift to broader AI automation solutions. It’s worth noting that while RPA has robotic in its name, it’s not really AI in a true sense. The bots in this case are more like scripts completing a set of highly manual tasks. By comparison, no-code automation solutions make it easy to create a workflow, presumably without consulting help. AI provides a way to intelligently implement tasks and take steps based on the data, instead of moving through a set of highly defined, hard-coded work. This decline is coming in spite of enthusiasm from investors, who valued UiPath at $35 billion when it raised $750 million last year, its last private fundraise prior to its IPO. Today the company’s market cap sits at close to $15 billion, certainly a precipitous drop in value, even taking into consideration the big hit software companies have been taking in the stock market over the last year. Meanwhile, we also saw some pretty significant consolidation as companies like SAP bought Signavio, ServiceNow acquired Intellibot and Salesforce snagged Servicetrace, as several examples.


The rise of confidential blockchains

Cryptoeconomics has long been founded upon the proof-of-work consensus algorithm. This algorithm has proven to be truly resilient to Byzantine attacks. But there are downsides. First, the performance of proof-of-work blockchains remains poor. Bitcoin, for example, still operates at seven transactions per second. Second, proof-of-work blockchains are also extremely energy-intensive. Today, the process of creating Bitcoin consumes around 91 terawatt-hours of electricity annually. This is more energy than is used by Finland, a nation of about 5.5 million people. While one section of commentators considers this a necessary cost of protecting the global cryptocurrency system, rather than just the cost of running a digital payment system, another section thinks this cost could be done away with by developing proof-of-stake consensus protocols, as they deliver much higher transaction throughput. Indeed, the proof-of-stake blockchains built on the Tendermint framework deliver upwards of 10,000 transactions per second. However, proof-of-stake blockchains also have some downsides.
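The "work" in proof-of-work is a brute-force search for a nonce whose hash falls below a target: expensive to find, trivial to verify. The toy miner below shows the mechanism at a tiny difficulty (real Bitcoin uses double SHA-256 over an 80-byte block header at a difficulty many orders of magnitude higher, which is where the energy cost comes from):

```python
import hashlib

def mine(block_data, difficulty=4):
    """Search for a nonce whose SHA-256 digest starts with `difficulty`
    zero hex digits. The expected number of attempts grows by a factor
    of 16 for each extra zero required."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block#1", difficulty=4)
```

Anyone can re-hash `block_data` with the returned nonce in one operation to confirm the work was done, which is the asymmetry that makes the scheme Byzantine-resilient and, equally, energy-hungry.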


Teaming is hard because you’re probably not really on a team

Real teams are all about solving the hardest, most complex problems. A diverse set of perspectives and skills is required to untangle these sorts of problems, for which there is no obvious solution. Members of a real team trust each other and work toward a common goal. Real teams are thoughtful, they argue, and they push each other to do better. They require nimble leaders who prioritize building connections within the team. They create clear boundaries that reinforce a strong sense of trust. They have a shared purpose and clear norms. And, importantly, they produce a collective output. If you see a group of people focusing intently on solving a single, very complex problem, you’re probably looking at a real team. Working groups are all about efficiency. Most people spend most of their productive time in working groups. We’ll say it again: there is nothing wrong with being in a working group. In fact, working groups are often best suited to the tasks at hand. Managers of working groups focus heavily on techniques to make their collaboration more efficient. 


How machine learning can course-correct inherent biases in recruiting

Often, if the job opening is attractive, there may be hundreds of people applying for a single position. Toward the end of the hiring process, all of the remaining candidates are more than good enough to do the job, yet most don’t make the final cut. Hiring managers often decide between them based on minute mistakes. These candidates are an underutilised resource for HR teams when recruiting. They have already proven themselves, but historically there hasn’t been an easy way to match them with other companies that would likely hire them based on their performance. Joonko has developed a platform made up entirely of silver medalists, pre-qualified candidates who have passed at least two stages of the recruiting process, and matches these candidates with future jobs, thus saving significant time in the recruiting process. ... “Silver medalists were already vetted by their peers, and the conversation with the candidates could be more around the specific needs of the organisation, without the excruciating part of the interview process.”



Quote for the day:

"Leaders need to strike a balance between action and patience." -- Doug Smith

Daily Tech Digest - March 09, 2022

Small Biz Takes Digital Highway

Data protection and cybersecurity issues will also take centre stage once small and medium businesses adopt digitisation at a larger scale. He says cyber attacks and complying with laws and policies will require companies to build mechanisms which entail considerable, if not hefty, costs. In fact, small and medium businesses seem to be already bearing the brunt of cyber attacks. A study by Cisco published in September 2021, which sampled about 1,014 local businesses, showed that about 74% of small and medium businesses had faced a cyber incident in the past 12 months. “At the end of the day, digital is here to stay, and nobody can ignore that. I am sure the service segment will evolve to offer solutions at affordable prices,” says Subbiah. Costs, after all, are a big factor for smaller companies, though companies are more than willing to spend on adding technology capabilities due to high return on investment. ... The bulk of the investment goes into cloud, automation and modern infrastructure, say analysts. “Specifically, within cloud, SaaS adoption is seeing acceleration as it entails lower costs and entry barriers,” says Abhinav Johri, director and practice head, digital consulting, EY.


2.5 million-plus cybersecurity jobs are open—women can fill them

Encouraging and nurturing the careers of women in cybersecurity is important for a number of reasons: Our cyber adversaries come from diverse backgrounds, which means that our defender community must be equally diverse in order to understand and succeed against them. We are facing a massive talent shortage in cybersecurity of more than 2.5 million job openings. This is putting a strain on security teams and organizations of every size. We can vastly decrease the deficit by deliberately expanding our hiring and mentorship of underrepresented groups who can bring so much to the table. Innovation is everything! And what’s more conducive to innovation than bringing together new perspectives, ideas, and experiences to solve today’s challenges? Cybersecurity depends on it because cybercrime tactics keep evolving. In fact, an MIT Technology Review article referred to cybersecurity versus cybercrime as “an innovation war.” Studies show that diversity of thought and leadership is just good for business.


IoT comes of age

Cities are near and dear to my heart as a former municipal CIO [chief information officer]. One of the challenges that we’ve seen in a number of large cities around the world is the amount of traffic congestion in the center of cities. A number of different cities have applied congestion pricing. They are tracking when vehicles are in the center of the city and charging for the times when congestion is highest. That doesn’t necessarily make the driver happy, but we have seen material changes in traffic patterns within those cities that have invested in congestion pricing. ... What we saw happen all too often was IoT being treated as a technology project, often run by the CIO or by a small business unit or factory plant all by themselves. And so the technology has changed, but the actual way of work has not. When we look at some of the lighthouse factories that Michael referenced earlier from the World Economic Forum, we see that they treat the integration of IoT as a holistic operating model transformation. When they look at how systems and processes are going to change on the factory floor, for example, they think about how they may need to motivate individuals working within that system differently. 


Critical flaws in remote management agent impact thousands of medical devices

Forescout has identified over 150 potentially vulnerable devices using Axeda from over 100 different manufacturers. Over half of the devices are used in healthcare, specifically lab equipment, surgical equipment, infusion, radiotherapy, imaging and more. Others were found in the financial services, retail, manufacturing and other industries and include ATMs, vending machines, cash management systems, label printers, barcode scanning systems, SCADA systems, asset monitoring and tracking solutions, IoT gateways and machines such as industrial cutters. The seven vulnerabilities, which Forescout has dubbed Access:7, include three critical ones that can result in remote code execution. One vulnerability (CVE-2022-25251) stems from unauthenticated commands present in the Axeda xGate.exe agent that allow an attacker to retrieve information about a device and change the agent's configuration. By changing the configuration, an attacker could point the agent to a server they control and hijack the functionality.


How to approach cloud compliance monitoring

One common strategy is to use the data collected by cloud and network monitoring tools to create a centralized view of compliance status across all these domains. This approach aligns well with current cloud and network monitoring practices. To start a cloud compliance monitoring strategy, divide the tasks identified above. Some are design-time considerations: an application will meet or fall short of compliance standards based on how developers build it. Others are run-time considerations, meaning the application requires surveillance during operations to validate compliance. The specific tools and procedures an organization applies to its cloud applications depend on how compliance requirements map to these categories. Build design-time compliance standards into the development pipeline, and validate them through logging and version monitoring. This requires a systematic way to initiate, execute, review, test and deploy cloud software. Teams must identify tools that enforce and document the requirements of each applicable standard.
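As a minimal sketch of the run-time side, here is how a centralized compliance view might flag gaps from collected monitoring data. The control names and asset records are hypothetical, not drawn from any specific standard or tool.

```python
# Hypothetical required controls; a real mapping would come from the
# applicable compliance standard for each domain.
REQUIRED_CONTROLS = {"encryption_at_rest", "audit_logging", "mfa"}

def compliance_gaps(asset: dict) -> set:
    """Return the required controls this asset is missing."""
    return REQUIRED_CONTROLS - set(asset.get("controls", []))

# Asset records as a monitoring tool might report them.
assets = [
    {"name": "billing-db", "controls": ["encryption_at_rest", "audit_logging", "mfa"]},
    {"name": "staging-bucket", "controls": ["audit_logging"]},
]

# Centralized view: asset name -> sorted list of missing controls.
report = {a["name"]: sorted(compliance_gaps(a)) for a in assets}
```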


Ukraine Fighting First-Ever 'Hybrid War' - Cyber Official

Ukraine continues to fight not just on the ground and in the air, but also online. "This is happening for the first time in history and I believe that cyber war can only be ended with the end of conventional war, and we will do everything we can to bring this moment closer," SSSCIP's Zhora said at a Friday press conference, the BBC reported. Zhora said Ukrainian cyber defenders continue to repel attacks on the country's online services and infrastructure, and said that "they are not afraid of Russian" attacks focused on such critical infrastructure as power plants or nuclear facilities, the BBC reported. Internet access remains shaky across Ukraine, due in part to continued bombing, says Britain's Ministry of Defence. "Ukrainian internet access is … highly likely being disrupted as a result of collateral damage from Russian strikes on infrastructure," it says. "Over the past week, internet outages have been reported in Mariupol, Sumy, Kyiv and Kharkiv." ... "Russia is probably targeting Ukraine's communications infrastructure in order to reduce Ukrainian citizens' access to reliable news and information," it adds.


7 reasons to embrace Web3 — and 7 reasons not to

Just because Bitcoin wastes so much energy doesn’t mean that Web3 will need to do the same. There are many protocols that offer some genuine assurance of correctness without requiring a bazillion transistors to be constantly solving some mathematical puzzle. Proof of stake, for example, is a neutral, decentralized protocol. It may not be perfect, but an adequate consensus model may be good enough for many parts of Web3. Many people might be just as happy with a blockchain managed by a coalition of trusted parties. It may not be theoretically free of domination, but if the coalition is big enough and the process is open, it could be embraced at a much lower cost in energy, silicon, and time. ... Our society is increasingly driven by data. Anything we can do to increase the accuracy of the data will help everyone who uses the information to make decisions. One side effect of adding more robust digital signatures and protocols to every interaction is that there will be more structure. ... Web3 is bound to have more accurate information, and that will lift every part of the web that depends upon it.


The Uncertain Future of IT Automation

As Automox predicted at the end of last year, IT and security transformation continue as organizations everywhere try to find a new normal following the disruptions of the pandemic, and IT automation will have to adjust. This has been challenging for many organizations — and more importantly, people, as discussed above — but there are silver linings too. The pandemic has pushed new innovation across many areas, with exciting new tools and practices on the horizon for IT and security teams. One innovation that is particularly interesting is cybersecurity mesh architectures. Gartner has claimed that “organizations adopting a cybersecurity mesh architecture will reduce the financial impact of security incidents by an average of 90 percent” by 2024. A cybersecurity mesh architecture leverages various parts of the enterprise to integrate widely distributed, disparate security services. This is key to managing and accounting for a workforce that has never been more remote and globally distributed.


Predicting the future of AI and analytics in endpoint security

What’s troubling about Unit 42’s findings for endpoints is that 40% of enterprises are still using spreadsheets to track digital certificates manually, and 57% of enterprises don’t have an accurate inventory of SSH keys. These two factors contribute to the widening gap in endpoint security that bad actors are highly skilled at exploiting. It’s common to find organizations that aren’t tracking up to 40% of their endpoints, according to a recent interview with Jim Wachhaus, attack surface protection evangelist at CyCognito. Jim told VentureBeat that it’s common to find organizations generating thousands of unknown endpoints a year. Supporting Jim’s findings are CISOs who tell VentureBeat that keeping track of every endpoint defies what can be done through manual processes today, as their IT staffs are already stretched thin. Add to that how CIOs and CISOs are battling a chronic labor shortage as their best employees are offered 40% or more of their base salary and up to $10,000 signing bonuses to jump to a new company, and the severity of the situation becomes clear. In addition, 56% of executives say their cybersecurity analysts are overwhelmed, according to BCG.


New attack bypasses hardware defenses for Spectre flaw in Intel and ARM CPUs

To mitigate the risk, software vendors such as Google and the Linux kernel developers came up with software-based solutions such as retpoline. While these were effective, they introduced a significant performance hit, so CPU vendors later developed hardware-based defenses. Intel's is called eIBRS and ARM's is called CSV2. The VUSec researchers explain that these solutions are complex, "but the gist of them is that the predictor 'somehow' keeps track of the privilege level (user/kernel) in which a target is executed. And, as you may expect, if the target belongs to a lower privilege level, kernel execution won’t use it." The problem, however, is that the CPU's predictor relies on a global history to select the target entries to speculatively execute and, as the VUSec researchers proved, this global history can be poisoned. In other words, while the original Spectre v2 allowed attackers to actually inject target code locations and then trick the kernel into executing that code, the new Spectre-BHI/BHB attack can only force the kernel to mispredict and execute interesting code gadgets or snippets that already exist in the history and were executed in the past, but which might leak data.



Quote for the day:

"A pat on the back is only a few vertebrae removed from a kick in the pants, but is miles ahead in results." -- W. Wilcox

Daily Tech Digest - March 08, 2022

Towards Artificial General Intelligence

Of course, there is no way a machine can feel and experience thoughts like a human, but it can compute and relate concepts, and encode human-like experience (e.g., a snake is dangerous and scary, therefore it must be avoided). So, what might be the solution in developing such relational networks, which could bring about a general form of AI called artificial general intelligence (AGI), which could 'think' like a human and in the way which was proposed in Dartmouth College in 1956? Simply more parameters in a neural network? Recent work conducted in my own lab with colleagues in Belgium suggests that a new approach of functional contextualism (which differs from current forms of cognitivism — e.g., of memory, attention, and reasoning through logic) may be the solution to progressing AI into the generalized form of AGI: a system that learns and understands concepts, how these relate to other concepts (through something called relational frames), and the context in which cues within the environment influence the functions, meanings, or uses of such concepts.


The Shape of Things to Come: GraphQL and the Web of APIs

The inflection point for GraphQL, however, is still a ways off. While there are some major companies using GraphQL, such as Shopify, REST is still the most-used API format in many other companies — including prominent API-based public companies Stripe and Twilio. I asked Jhingran whether he sees those types of companies pivoting to GraphQL over time? He first noted that he doesn’t see GraphQL usurping REST. “We typically find that the REST layer that enterprises have built, [they] have embedded business logic into it. And the GraphQL there, in general, will be a composition layer — as opposed to incorporating deep business logic. And therefore, both will actually co-exist.” However, Jhingran does think that more and more companies will start using GraphQL for their external services, a trend that will happen in stages. “Backend teams are becoming comfortable with GraphQL for the apps that are built by the team,” he said, meaning applications developed internally. Backend developers will take more time to get comfortable using GraphQL APIs from third-party companies, although Jhingran pointed to GitHub and Shopify’s GraphQL APIs as early examples.
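To illustrate why GraphQL works well as a composition layer, here is the shape of a single request that replaces several REST round-trips. The schema (order, customer, items) and the /graphql endpoint mentioned in the comments are hypothetical, chosen only for illustration.

```python
import json

# One GraphQL query fetching exactly the nested fields the client needs.
# The REST equivalent might be three calls: /orders/42, /customers/<id>,
# and /products/<id> for each line item.
query = """
query {
  order(id: 42) {
    total
    customer { name email }
    items { quantity product { title } }
  }
}
"""

# GraphQL is transported as an ordinary POST body to a single endpoint,
# typically /graphql, regardless of how many backend services it composes.
payload = json.dumps({"query": query})
```

This is what makes it a composition layer: the backend resolvers can call into existing REST services with embedded business logic, so the two coexist rather than compete.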


Stanford cryptography researchers are building Espresso, a privacy-focused blockchain

Espresso Systems, the company behind the blockchain project, is led by Fisch, chief operating officer Charles Lu and chief scientist Benedikt Bünz, collaborators at Stanford who have each worked on other high-profile web3 projects, including the anonymity-focused Monero blockchain and BitTorrent co-founder Bram Cohen’s Chia. They’ve teamed up with chief strategy officer Jill Gunter, a former crypto investor at Slow Ventures who is the fourth Espresso Systems co-founder, to take their blockchain and associated products to market. To achieve greater throughput, Espresso uses ZK-Rollups, a solution based on zero-knowledge proofs that allow transactions to be processed off-chain. ZK-Rollups consolidate multiple transactions into a single, easily verifiable proof, thus reducing the bandwidth and computational load on the consensus protocol. The method has already gained popularity on the Ethereum blockchain through scaling solution providers like StarkWare and zkSync, according to Fisch. At the core of Espresso’s strategy, though, is a focus on privacy and decentralization. 
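The batching idea behind rollups can be sketched with a Merkle-tree commitment: many off-chain transactions are condensed into one small value posted on-chain. Note this toy shows only the consolidation step; it is not a zero-knowledge proof, which is what lets real ZK-Rollups additionally prove the batch was processed correctly.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(transactions) -> str:
    """Condense a batch of transactions into a single 32-byte commitment."""
    level = [h(tx.encode()) for tx in transactions]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0].hex()

# Eight off-chain transactions become one on-chain value; tampering with
# any transaction in the batch changes the root.
txs = [f"tx-{i}" for i in range(8)]
root = merkle_root(txs)
```

Because the consensus protocol only has to agree on the root (plus the proof), bandwidth and computational load shrink dramatically compared with processing each transaction on-chain.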


5 Tips on Managing a Remote-first Development Team

Working from home makes it harder to remain connected with team members and stakeholders as it reduces not only the frequency of our communication but also its quality. We tend to rely more on email and instant messaging, both purely written media (if you exclude the occasional GIF). How can we, as managers and leaders, ensure that we are fostering healthy and effective communication in our teams and with our stakeholders? Meet 1:1 often and effectively. Unfortunately, one of the first casualties of working remotely tends to be 1:1 meetings. Most managers, particularly early-career ones, have difficulties leading good 1:1s. Engineers seem to be particularly averse to bad meetings, seeing them as distractions or as a waste of time. This also applies to stakeholders, making addressing concerns, conveying important updates or solving project constraints more difficult. The importance of 1:1 meetings cannot be overstated. In its famous Project Oxygen, Google found that managers with higher feedback scores also tended to have more frequent and higher-quality 1:1 meetings with their teams.


Blockchain and GDPR (General Data Protection Regulation)

The most obvious method to sidestep the GDPR is simply not to put any individual data on the blockchain relating to any private citizen or resident of the EU. However, this drastically reduces the usefulness of blockchains for any public application, such as health record tracking, social media, reputation reporting systems associated with online sales, and identity systems such as an international passport. The GDPR does not specify whether subsequent corrections to the data are acceptable when the original incorrect data is still present in earlier blocks on the blockchain. ... A further possibility is to ensure that all private data stored on the blockchain is encrypted. In such a situation, the company responsible for data care can provide evidence of the deletion of the data by ensuring that the decryption key is destroyed. Another approach may be to shift the responsibility for protecting the private key to the individual whose data is being stored on the blockchain.
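The key-destruction idea (often called crypto-shredding) can be sketched in a few lines. To stay dependency-free, this illustration uses a one-time pad (XOR with a random key as long as the data); a production system would use an audited cipher such as AES-GCM, and the record itself is invented for the example.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

record = b"patient: Jane Doe, blood type O-"
key = secrets.token_bytes(len(record))     # held off-chain by the data controller

ciphertext = xor(record, key)              # only the ciphertext goes on-chain
assert xor(ciphertext, key) == record      # readable while the key exists

# "Erasure": destroy the only copy of the key. The immutable ciphertext
# remains on the blockchain but is now permanently unreadable.
key = None
```

The blockchain itself is never modified; deleting the key is what satisfies the erasure request, which is why evidence of key destruction matters.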


IT talent: 3 tips to kickstart employee career development

Training typically focuses on hard skills, which is not surprising for a technical role, but it is critical to understand the importance of soft skill training for IT professionals. Soft skills are particularly essential amidst digital transformation efforts, which cannot be done in a silo and require strong communication among many departments. I recently had the opportunity to participate in an eye-opening leadership course on driving strategic growth, which reinforced for me how important it is to take the time to build and strengthen soft skills. This course ran over three months and consisted of lectures, formal learning, and time to practice. Ultimately, we returned to the larger group to share learnings from our practice time. This learning process reminded me how important it is to build learning programs around soft skills. While teaching soft skills can require time and effort, incorporating them into skills training can positively impact your IT employees’ experience and set them on a path of growth and development.


Indian govt kicks off the consultation process to build a fairness assessment framework for AI/ML systems

“We have been studying various aspects of AI/ ML where some standardisation or testing and certification framework could be established. Moreover, we have studied the works of various researchers where biases in various AI/ ML systems deployed by leading corporates and governments are deliberated. Biases in AI/ ML Systems are a real threat, and ensuring fairness in such applications is very important to build public trust in AI/ ML Systems. Accordingly, we have initiated discussions for evolving a framework for fairness certification of such systems,” said Avinash Agarwal, DDG, Telecommunication Engineering Centre. TEC aims to set up standard operating procedures (SOP) to assess the fairness of various AI/ ML systems and create a benchmark. Systems that conform to the specifications will be given a fairness certification, ensuring product credibility and public trust in AI/ ML. “To achieve this, we will follow a consultative process for framing standards, specifications and test schedules. Then, we plan to prepare a draft document based on the various inputs received and release it for public consultations.”


SOARs vs. No-Code Security Automation: The Case for Both

This is not to say that you’re required to ditch your SOAR and replace it with a lightweight security automation platform like Torq. Many businesses that have dedicated cybersecurity teams may opt to continue to use their SOARs as the place where they detect and manage the most complex threats, such as active, targeted attacks by professional threat actors. But for managing more mundane risks — like blocking phishing emails, securing sensitive data or detecting malicious users — lightweight no-code security automation is a more practical solution. It’s much easier to deploy, and it empowers all stakeholders to support security operations, even at organizations that have minimal cybersecurity resources. By extension, no-code security automation is the key to thriving in the face of today’s pervasive threats. When you operate in a world that sees 26,000 DDoS attacks and 4,000 ransomware attacks each day, and where threat actors are constantly probing your systems for an open door, you need more agility and automated remediation than a SOAR alone can deliver.


Closing the data quality gap

Data agility has become a central pillar of building a supple business, one that can move, and pivot quickly as new information arises. In the simplest terms, data agility is the distance between the data that informs a decision and the decision itself. This means that poor quality data will lead to poor decision making. Pairing trustworthy contact data with an agile data management programme enables organisations to make their data actionable, allowing for better and faster decisions when pursuing new and existing opportunities. That’s why 94% of business leaders believe having agility in both business and data practices is important in responding to the pandemic. Achieving greater agility requires business leaders to rethink their use of technology and be more open to integrating it into their businesses. Half the human brain is devoted to processing visual images and processes data at 60 bits per second. That might go some way to explain why four out of 10 leaders say they are looking for easy-to-use solutions; in turn this helps enable data and business users alike to visualise, read, write, and argue with data insights.


Digesting Blockchain

The servers are located in different computers around the network, therefore being distributed (peer-to-peer or P2P). Once a transaction is made, the new information is replicated and received by the nodes within the P2P network and added to the corresponding block open at that time. The block will contain the transaction information, and each transaction will be assigned a "hash" once it has been validated by the network nodes — the cryptographic hash is like the digital fingerprint of the transaction and is represented by a sequence of numbers. A hash is a function that converts one value into another, and the output has a fixed length. The information about the transaction recorded in the block can include details regarding who, what, how, how much, or when the transaction happened. Neither the information contained in the block nor its hash can be changed. Accordingly, if the information in a block is changed, the whole sequence of blocks will become invalid.
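The chaining behavior described above, where each block stores the previous block's hash so that altering any block invalidates everything after it, can be demonstrated in a few lines:

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256(f"{prev_hash}|{data}".encode()).hexdigest()

def build_chain(transactions):
    chain, prev = [], "0" * 64                 # genesis predecessor is all zeros
    for data in transactions:
        digest = block_hash(prev, data)
        chain.append({"data": data, "prev": prev, "hash": digest})
        prev = digest
    return chain

def is_valid(chain) -> bool:
    """Recompute every hash and check each block points at its predecessor."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block["hash"] != block_hash(prev, block["data"]):
            return False
        prev = block["hash"]
    return True

chain = build_chain(["alice->bob 5", "bob->carol 2", "carol->dave 1"])
```

Editing any block's data breaks that block's own hash, and recomputing the hash breaks the `prev` pointer of every later block, so the tampering is always detectable.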



Quote for the day:

"The test we must set for ourselves is not to march alone but to march in such a way that others will wish to join us." -- Hubert Humphrey

Daily Tech Digest - March 07, 2022

Graphcore Supercharges IPU With WoW Processor

Graphcore affirmed that its Bow IPU chip delivers 40% higher performance and 16% better power efficiency for real-world applications when compared to previous models. Furthermore, the British semiconductor firm stated that its Bow Pod flagship products can deliver more than 89 petaFLOPS of AI compute, and the superscale Bow Pod can scale up to 350 petaFLOPS. Graphcore’s plan to develop an ultra-intelligent AI supercomputer is one of the more fascinating announcements in the tech world. Graphcore emphasized that the human brain combines approximately 100 billion neurons and more than 100 trillion parameters in a biological neural network that delivers a level of computing yet to be matched by any silicon computer. It also mentioned that it is developing an AI computer that will exceed the parametric capacity of the brain. The computer is named ‘Good’ after Jack Good, a pioneer of computer science whose pivotal achievements during the Second World War are worth reading about.


5 Best Practices For Code Review

Seek advice or help from fellow developers, as everyone’s contribution is equally important. Experienced developers can spot a mistake within seconds and rectify it, while younger developers often come up with simpler ways to implement a task. So, ask your juniors too: their curiosity to learn pushes them to find alternative approaches, which benefits everyone in two ways: a) they gain deeper knowledge; b) the solution can be more precise. ... A function is a piece of code that performs a single task and can be called whenever required. Avoid repeating code: if you find yourself writing the same code for different tasks again and again, extract it into functions. Using functions keeps the codebase maintainable. For example, suppose you are building a website out of several components that share basic functionality. If a block of code is repeated many times, move it into a function in a file that can be invoked (reused) wherever and whenever required. This also reduces the complexity and length of the codebase.
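The "extract a function" advice can be made concrete with a small before-and-after; the discount logic here is purely an illustrative stand-in:

```python
# Before: the same logic copy-pasted at every call site,
# so a bug fix would have to be repeated everywhere.
#   price_a = round(100 * (1 - 0.1), 2)
#   price_b = round(250 * (1 - 0.1), 2)

def discounted(price: float, rate: float = 0.1) -> float:
    """Apply a discount and round to cents; one place to change the rule."""
    return round(price * (1 - rate), 2)

# After: each call site shrinks to a readable one-liner,
# and the rounding rule lives in exactly one place.
price_a = discounted(100)
price_b = discounted(250)
```

In a code review, spotting the repeated expression and suggesting the helper is exactly the kind of feedback that keeps a codebase maintainable.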


Rush to cloud computing is outpacing organizations' ability to adapt

Educating the business is a vital piece of an effective strategy. The Harvard Business Review report describes how Chegg, an educational technology and information publisher, has been rearchitecting its cloud approach over the past year to create smaller, more flexible cloud accounts for use by its engineering teams. "We've been in cloud for so long, we've learned a lot of what's working and what isn't working," John Heasman, chief information security officer, is quoted in the study. "We ended up in a position where we needed to take a step back and look at our architecture to align with best practices in cloud infrastructure and improve our processes overall." Heasman and his team concentrated on educating the company's leaders on the ways its cloud strategy will result in new services. "It's not just a case of saying, 'Here's a new account. It's yours,'" Heasman says. "It required a lot of planning to ensure the right level of oversight while still enabling our team to get the full benefit of cloud-native technology."


How Attack Surface Management Preempts Cyberattacks

ASM is a technology that either mines Internet datasets and certificate databases or emulates attackers running reconnaissance techniques. Both approaches aim at performing a comprehensive analysis of your organization's assets uncovered during the discovery process. Both approaches include scanning your domains, sub-domains, IPs, ports, shadow IT, etc., for internet-facing assets before analyzing them to detect vulnerabilities and security gaps. Advanced ASM includes actionable mitigation recommendations for each uncovered security gap, recommendations ranging from cleaning up unused and unnecessary assets to reduce the attack surface to warning individuals that their email address is readily available and might be leveraged for phishing attacks. ASM includes reporting on Open-Source Intelligence (OSINT) that could be used in a social engineering attack or a phishing campaign, such as personal information publicly available on social media or even on material such as videos, webinars, public speeches, and conferences.
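The discovery step that ASM automates can be approximated, at toy scale, with the standard library: checking which service ports are reachable on a host. The host and port list below are placeholders; scan only assets you are authorized to test, and note that real ASM products go far beyond this (certificate mining, subdomain enumeration, shadow-IT detection).

```python
import socket

def open_ports(host: str, ports, timeout: float = 0.5) -> list:
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Placeholder target: the local machine and a few common service ports.
exposed = open_ports("127.0.0.1", [22, 80, 443, 8080])
```

Each port found open is an internet-facing entry point that then needs the analysis step: is this service supposed to be exposed, and is it patched?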


How artificial intelligence is influencing the arms race in cybersecurity

There are two main ways A.I. is bolstering cybersecurity. First, A.I. can help automate many tasks that a human analyst would often handle manually. These include automatically detecting unknown workstations, servers, code repositories, and other hardware and software on a network. It can also determine how best to allocate security defenses. These are data-intensive tasks, and A.I. has the potential to sift through terabytes of data much more efficiently and effectively than a human could ever do. Second, A.I. can help detect patterns within large quantities of data that human analysts can’t see. For example, A.I. could detect the key linguistic patterns of hackers posting emerging threats on the dark web and alert analysts. More specifically, A.I.-enabled analytics can help discern the jargon and code words hackers develop to refer to their new tools, techniques, and procedures. One example is using the name Mirai to mean botnet. Hackers developed the term to hide the botnet topic from law enforcement and cyberthreat intelligence professionals.


Three reasons why your API security is failing

Many organisations have adopted DevOps practices, realising efficiencies in the development cycle. It’s natural that they would want to remove similar barriers with security. In the recent Salt Labs State of API Security report, 40% of respondents said developers or DevOps teams hold primary responsibility for securing APIs, but 95% of respondents experienced an API security incident in the past year, highlighting that the burden cannot fall solely on developer shoulders. Developers make applications work, but attackers make them perform in unintended ways. It’s difficult for developers to shift into an attacker’s mindset. Despite the methods available to identify potential vulnerabilities, it’s rare that all aspects of code are tested. Furthermore, as it is so difficult to keep up with today’s ultra-fast pace of code delivery, developers typically only test primary apps or specific areas of functionality, and most scanning tools depend on best practices and signatures to identify vulnerabilities. Yet, these approaches are ineffective at identifying unique logic vulnerabilities.


Kremlin’s Aggression Divides Digital Ecosystems Along Tech Trenches

Sanctions to restrict international financial transactions and other commerce with Russia have already been put to work. Now that country faces the loss of certain technology services and resources as more tech companies seek to decouple themselves from the aggressor state. As the grim war on the ground rages on, new lines of demarcation are emerging across the digital world. The future of greater connectivity may look drastically different than expected, says Raj Shah, head of tech, media, and telecoms for North America at digital consulting firm Publicis Sapient. The first globalization was supposed to be a singular, interconnected world, he says, but China emerged as a challenger to the United States and other economically allied nation states. Now Russia’s actions may further fracture the dynamics of the digital landscape. “There does appear to be this fragmentation that’s going to start to happen,” Shah says. ... There may be some interchanges of information in buffer zones, he says, where some technology and commerce from opposing geopolitical spheres can intersect, but there will also be cordoned-off spaces.


Update: Samsung Confirms Source Code Stolen in Breach

The ransomware group released a teaser on its Telegram channel before posting the data, saying, “get ready, Samsung data coming today.” The gang then posted confidential Samsung source code in a compressed file, available via torrent and split into three parts, totalling almost 190GB of data. Lapsus$ published a description of the leak, which it says includes source code for every Trusted Applet installed in the TrustZone of Samsung devices, with specific code for every type of TEE OS (QSEE, TEEGris, etc.). Trusted Applets are used for sensitive operations such as full access control and encryption. The group says the leak also includes DRM modules, Keymaster/Gatekeeper code, and algorithms for all biometric unlock operations: “Source code that communicates directly with sensor (down to the lowest level), we're talking individual RX/TX bit streams here and boot loader source code for all recent Samsung devices, including Knox data and code for authentication,” the gang says.


Three unusual questions to make job candidates think

A job interview is intended to be a kind of crystal ball, one that gives the employer a sense of what somebody is going to be like not only the first day but also after six months, when the honeymoon period is over and they’re facing a few challenges. I can’t think of a better question for ascertaining how an employee will behave over time. Of course, the thought of asking such a personal question may understandably make interviewers uncomfortable (although I’ve been assured by many HR professionals that the question is fair game). If so, simply focus on the positive by asking, “What qualities do you like the most in your parents?” Here’s the question that Clara Lippert Glenn, then CEO of the energy-industry training company Oxford Princeton Programme, told me she poses to candidates: “If you woke up tomorrow morning, and there were no humans left on the earth—just animals—what kind of animal are you?” I’ve used this question many times as an icebreaker with small groups, and added a second beat to the question, which is to ask people to explain why they chose that particular animal.


Agile transformation: 3 ways to achieve success

Too many organizations leave much of their knowledge untapped due to outdated, bottlenecked processes. In fact, 80 percent of undocumented human knowledge goes untapped. By mining every organizational knowledge resource, employees can work together beyond the scope of their own team, personal network, and organizational silos, eliminating remote-worker disconnects, data gaps, and inaccurate or outdated information. Moreover, employees become empowered to build on their knowledge, increase productivity, enjoy a greater stake in their work, and reach their potential without organizational hierarchies, geography, cultures, or languages impeding them. The result is greater cultural inclusivity and increased organizational agility. Investing in tools, procedures, and cultural shifts that close knowledge gaps, break down silos, and foster greater collaboration provides a return on investment in agility, retention, and productivity. Additionally, higher concentrations of knowledge collaboration unleash enormous amounts of tacit knowledge that would otherwise remain hidden and unutilized.



Quote for the day:

"Take time to deliberate; but when the time for action arrives, stop thinking and go in." -- Andrew Jackson