Daily Tech Digest - February 22, 2022

Partner Across Teams to Create a Cybersecurity Culture

Just because a software engineer doesn’t work on the security team doesn’t mean that security isn’t their responsibility. In addition to the standard security training, you can further empower your engineering teams by training and encouraging them to think like hackers. I was fortunate enough to work for a company some time ago that scheduled annual competitions with prizes and bragging rights. These competitions served as security training and engaged us in a series of engineering puzzles that included SQL injection, cross-site scripting (XSS), cryptography and social engineering. ... Even with well-implemented training programs and a dedicated cadre of security-minded engineers building your applications, there is still plenty for your security engineers to work on. The shared-responsibility model will reduce the risk of successful phishing attacks or other malicious activity, but it won’t remove it entirely. Ideally, security teams will move from a place where they are constantly fighting fires to one where they can engage in strategic initiatives to further improve security for the organization, automate risk detection wherever possible, and prepare your organization for the future.


Agile Doesn’t Work Without Psychological Safety

Soon after implementing agile, many organizations revert to the default position of worshiping at the altar of technical processes and tools, because cultural considerations seem abstract and difficult to operationalize. It’s easier to pay lip service to the human side and then move on to scrumming, sprinting, kanbaning, and kaizening because these processes serve as tangible, measurable, and observable indicators, giving the illusion of success and the appearance of developing agile at scale. Begin your agile transformation by framing agile as a cultural rather than a technical or mechanical implementation. In doing so, be careful not to approach culture as a workstream. A workstream is defined as the progressive completion of tasks required to finish a project. When we approach culture as a workstream within the context of agile, we classify it as something that can be completed. Culture cannot be completed. Yet I see agile teams attempting to project-manage it as part of the work breakdown structure, as if it has a beginning, middle, and end. It doesn’t.


Inside the U.K. lab that connects brains to quantum computers

While BCIs and quantum computers are undoubtedly promising technologies emerging at the same point in history, the question is why bring them together – which is exactly what the consortium of researchers from the U.K.’s University of Plymouth, Spain’s University of Valencia and University of Seville, Germany’s Kipu Quantum, and China’s Shanghai University are seeking to do. Technologists love nothing more than mashing together promising concepts or technologies in the belief that, when united, they will represent more than the sum of their parts. Sometimes this works gloriously. As the venture capitalist Andrew Chen describes in his book The Cold Start Problem, Instagram leveraged the emergence of camera-equipped smartphones and the simultaneous powerful network effects of social media to become one of the fastest-growing apps in history. Taking two must-have technologies and combining them doesn’t always work, though. Apple CEO Tim Cook once quipped that “you can converge a toaster and a refrigerator, but, you know, those things are probably not going to be pleasing to the user.”


Three ways COVID-19 is changing how banks adapt to digital technology

Bank leaders face the difficult task of balancing the traditional approach to risk management with the need to respond quickly to a crisis that has created massive changes to their operating environment. Criminal cyber activity, including fraud and phishing attacks, has increased as more employees work remotely. However, as one participant said: “We have not yet seen the massive increase in sophisticated, advanced persistent threat cyber attacks that we normally associate with events like these.” As banks shift from crisis mode, their boards need to address new emerging risks, such as video and voice communication surveillance with everyone using Zoom and other platforms, data security controls for the use of personal equipment, and cases of third and fourth parties falling victim to cyber issues. ... As the economic impacts of the pandemic become clearer, banks are updating risk models and stress scenarios in an attempt to stay ahead of the curve. However, uncertainty in the operating environment continues to pose challenges. A lack of regulatory harmonization may further complicate benchmarking among peers across countries, though there is hope that this will improve soon.


The threat of quantum computing to security infrastructure

The report states: “The encryption technologies that are securing Canada’s financial systems today will one day become obsolete. If we do nothing, the financial data that underpins Canada’s economy will inevitably become more vulnerable to cyber criminals.” In the US, as noted above, the National Security Agency took an early lead in identifying the perceived threat. On January 19, 2022, an action from the US president was made public. The White House issued a “Memorandum on Improving the Cybersecurity of National Security, Department of Defense and Intelligence Community Systems.” The document shows the urgency needed to address perceived major threats. It outlines major actions to avoid security lapses that would be created by quantum computers targeting critical secret data and related infrastructure. It also identifies the management responsibilities in the various agencies to implement these measures within a matter of months. This perceived threat to existing cybersecurity will generate a great deal of private-industry activity and bring well-funded new companies into the business of transitioning to new security solutions.


AI fairness in banking: the next big issue in tech

“People want to be treated fairly by an agent whether artificial or not. The difference for a lot of applications is that people are not aware of the full extent of the decision making and the statistical regularities across a larger population where some of these issues can arise. There is a lot of cynicism around these decisions.” He adds that there are technical as well as organisational solutions that financial services providers need to apply. These, combined with policies of transparency about the processes in place, provide an overall strategy. He adds: “The first thing is to have processes of regularly reporting on and examining and making corrections to data that is used to train models as well as to test them. “So, a simple test is representation of people that belong to legally protected categories by race, age, gender, ethnic origin and religious status to determine if there is enough data to represent each of these groups with accurate models. In addition, there is a need to determine whether there are other inputs to the model or features that could be correlated with these protected classes and have a potentially adverse or discriminatory impact on the output of the model.”
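
A minimal sketch of the two checks described in that quote, assuming tabular application data in pandas: counting how well each legally protected group is represented, and flagging other model inputs that correlate strongly with a protected attribute. The column names, thresholds and sample usage are illustrative assumptions, not something from the article.

```python
# Sketch of the two fairness checks described above:
# (1) is each legally protected group represented with enough data, and
# (2) do other model inputs correlate strongly with protected attributes?
# Column names and thresholds are illustrative assumptions.
import pandas as pd

def representation_report(df: pd.DataFrame, protected_cols, min_rows=1000):
    """Count rows per protected group and flag under-represented ones."""
    report = {}
    for col in protected_cols:
        counts = df[col].value_counts()
        report[col] = {
            "counts": counts.to_dict(),
            "under_represented": counts[counts < min_rows].index.tolist(),
        }
    return report

def proxy_feature_check(df: pd.DataFrame, protected_cols, numeric_feature_cols, threshold=0.4):
    """Flag numeric features whose correlation with a protected group indicator exceeds a threshold."""
    flagged = []
    for p in protected_cols:
        # One-hot encode the protected attribute so correlation works for categories.
        dummies = pd.get_dummies(df[p], prefix=p).astype(float)
        for f in numeric_feature_cols:
            for d in dummies.columns:
                corr = df[f].corr(dummies[d])
                if pd.notna(corr) and abs(corr) > threshold:
                    flagged.append((f, d, round(corr, 3)))
    return flagged

# Hypothetical usage with loan-application data:
# df = pd.read_csv("applications.csv")
# print(representation_report(df, ["age_band", "gender"]))
# print(proxy_feature_check(df, ["age_band", "gender"], ["income", "postcode_index"]))
```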


4 common misunderstandings about enterprise open source software

It might seem natural to download community-supported bits from the Internet rather than purchase an integrated product. This is especially the case when the community projects are relatively simple and self-contained or if you have reasons to develop independent expertise or do extensive customization. (Although working with a vendor to get needed changes into the upstream project is a possible alternative in the latter case.) However, if the software isn’t a differentiating capability for your business, hiring the right highly-skilled engineers is neither easy nor cheap. There’s also the ongoing support burden if your downloaded projects turn into a fork of the upstream community project. And if you don’t want them to, you’ll need to factor in the time to work in the upstream projects to get needed features added. There’s also a lot of complexity in categories like enterprise open source container platforms in the cloud-native space. Download Kubernetes? You’re just getting started. How about monitoring, distributed tracing, CI/CD, serverless, security scanning, and all the other features you’ll want in a complete platform? 


Leadership when the chips are down

Particularly noteworthy is the obsessive nature of Shackleton’s encounter with a territory so resistant to accurate perception. We risk bathos to say that the business landscape presents challenges on a par with the South Pole, yet the perceptual difficulties posed by Antarctica offer clear parallels for executives and entrepreneurs. The southernmost continent is unpredictable, unstable, and unforgiving. Compasses don’t behave normally. Much of what appears terra firma is actually floating ice, and deadly crevasses lurk under the snow. Snow blindness, a painful effect of the dazzling surroundings, can make vision itself impossible. ... Shackleton’s failings as a manager were manifest in his planning for the Heart of the Antarctic expedition. For a trip on foot of 1,720 miles to and from the Pole, his four-man unit brought food for just 91 days of hard labor, high altitude, and mind-numbing cold. His return instructions to the crew of the Nimrod, the ship that dropped off his party, were impossibly vague. 


How can banks remain relevant in the fastest growing digital market in the world?

While bolting on a digital banking system may be a quick fix for incumbents, the only way for FIs to truly keep up with the pace of change and future-proof their business is to invest in modern architecture which offers them the flexibility required to develop and deploy products and services at speed. Built with advanced customisation at their core, modern platforms enable FIs to approach product development with a different mindset to those struggling with legacy systems. As a result, FIs benefit from faster time-to-market, being able to scale up innovative digital operations, offer new products or services, and respond to ever-changing market requirements much faster. Shifting consumer behaviours, coupled with intensified competition, are making it increasingly difficult for banks in the APAC region to remain relevant. They are fighting not only to keep their loyal customer base, but also to stay ahead of the curve by offering customers the advanced digital services they require. Only by ensuring they have a comprehensive, future-proof system in place, underpinning their operations, will they truly be able to embrace the digital future.


Sustaining Agile Transformation – Our Experience

The organization needs to rethink and create a career roadmap for Agile roles such as Product Owner, Scrum Master, and Developers. The organization must build and enhance self-paced and embedded learning experiences, develop role-based training, develop new learning areas, and so on. For certain key roles, organizations can focus on establishing academies such as a Scrum Master Academy. This will ensure there is continuous learning and a flow of trained Scrum Masters as and when needed. Coaching skills should be taught and embedded in Agile leaders and change agents. Ensure leaders are trained and embrace foundational values and principles. Establishing and retaining a central team such as a lean CoE is very beneficial for overseeing the transformation and providing support when needed. The organization can deliberate on establishing the CoE at divisional or organizational levels. Collaborative forums such as CoPs, guilds, and chapters should be established and run successfully.



Quote for the day:

"Leaders must see the dream in their mind before they will accomplish the dream with their team." -- Orrin Woodward

Daily Tech Digest - February 21, 2022

What’s the buzz around AGI safety

Specification in AGI systems defines a system’s goal and makes sure it aligns with the human developer’s intentions and motives. These systems follow a pre-specified algorithm that allows them to learn from data, which helps them to achieve a specific goal. Meanwhile, both the learning algorithm and the goal are given by the human designer—for example, goals like minimising a prediction error or maximising a reward. During training, the system will try to complete the objective, irrespective of how it reflects on the designer’s intent. Hence, designers should take special care and clarify an objective that will lead to the desired or optimal behaviour. If the goal is a poor proxy for the intended behaviour, the system will learn the wrong behaviour, and the goal is considered “misspecified.” This is a likely outcome where the specified goal does not align with the desired behaviour. To adhere to AGI safety, the system designer must understand why the system behaves the way it does and whether it will ever align with the designer’s intent. A robust set of assurance techniques already exists for older-generation systems.
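
As a toy illustration of misspecification (mine, not from the article): suppose the designer wants a recommender to maximise user satisfaction but specifies clicks as the reward. A policy that optimises the proxy can score well on the specified goal while the intended behaviour suffers. All numbers below are invented.

```python
# Toy illustration of a misspecified objective: the designer intends
# "user satisfaction" but specifies "clicks" as the reward. A policy that
# optimises the proxy looks good on it while the intended goal suffers.
import random

def simulate(policy: str, steps: int = 10_000):
    clicks, satisfaction = 0, 0
    for _ in range(steps):
        if policy == "clickbait":
            clicked = random.random() < 0.30                 # provocative items get many clicks...
            satisfied = clicked and random.random() < 0.20   # ...but rarely satisfy the user
        else:  # "relevant"
            clicked = random.random() < 0.15
            satisfied = clicked and random.random() < 0.80
        clicks += clicked
        satisfaction += satisfied
    return clicks, satisfaction

for policy in ("clickbait", "relevant"):
    c, s = simulate(policy)
    print(f"{policy:9s}  proxy reward (clicks) = {c:5d}   intended goal (satisfaction) = {s:5d}")
```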


Google Cloud CISO Phil Venables On 8 Hot Cybersecurity Topics

“A different environment compared to my career in financial services — many things the same, but many things different, especially the scale of what we do and our ability to invest even more in security than even some of the largest banks are able to invest,” Venables said. Google integrated its risk, security, compliance and privacy teams from across the company into the Google Cybersecurity Action Team announced last October. The consolidated team will provide strategic security advisory services, trust and compliance support, customer and solutions engineering, and incident response capabilities. “Those were all teams that were doing really, really good stuff, but we thought it made sense for them to be part of one integrated organization for cloud given the importance of all four of those topics, making sure that we provide even more focus on those things together,” Venables said. “That’s working out very well, and I think that’s reflected in a lot of large organizations that are aligning their risk, compliance, security and privacy teams because of a lot of the commonality between the types of controls that you have to implement to drive those things effectively.”


Real-Time Policy Enforcement with Governance as Code

Cloud governance as code encourages collaboration and promotes agility. Through this approach, development, operation, security and finance teams can gain visibility into policies, and they can collaborate more effectively on policy definition and enforcement. Teams can quickly and efficiently modify policies and create new policies, and changes can be implemented in much the same way teams modify application code or underlying infrastructure in today’s agile, DevOps environments. ... Governance as code is emerging as a foundational requirement for organizations scaling operations in the cloud. It champions automated management of the complex cloud ecosystem via a human-readable, declarative, high-level language. Infrastructure and security engineering teams can adopt governance as code to enforce policies in an agile, flexible and efficient manner while reducing developer friction. With governance as code, developers can avoid the obstacles that often hinder or discourage cloud adoption altogether, allowing for greater automation of and visibility into an organization’s cloud infrastructure, unifying teams in their greater mission to achieve success.
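
A minimal sketch of what “governance as code” looks like in practice: policies expressed as declarative, human-readable data and evaluated automatically against resource configurations, typically in the same pipelines that deploy infrastructure. The policy schema and resource fields below are invented for illustration; real engines such as Open Policy Agent have their own policy languages.

```python
# Minimal sketch of governance as code: policies as declarative data,
# evaluated automatically against resource configurations in a pipeline.
# The policy schema and resource fields here are invented for illustration.
POLICIES = [
    {"id": "storage-encryption",  "resource": "bucket", "field": "encrypted", "equals": True},
    {"id": "no-public-buckets",   "resource": "bucket", "field": "public",    "equals": False},
    {"id": "vm-approved-regions", "resource": "vm",     "field": "region",    "one_of": ["eu-west-1", "eu-central-1"]},
]

def evaluate(resource: dict, policies=POLICIES) -> list:
    """Return the IDs of policies that this resource violates."""
    violations = []
    for p in policies:
        if p["resource"] != resource.get("type"):
            continue
        value = resource.get(p["field"])
        if "equals" in p and value != p["equals"]:
            violations.append(p["id"])
        if "one_of" in p and value not in p["one_of"]:
            violations.append(p["id"])
    return violations

# This check would typically run in CI/CD against planned infrastructure changes.
print(evaluate({"type": "bucket", "name": "logs", "encrypted": False, "public": True}))
# -> ['storage-encryption', 'no-public-buckets']
```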


Leveraging machine learning to find security vulnerabilities

Code security vulnerabilities can allow malicious actors to manipulate software into behaving in unintended and harmful ways. The best way to prevent such attacks is to detect and fix vulnerable code before it can be exploited. GitHub’s code scanning capabilities leverage the CodeQL analysis engine to find security vulnerabilities in source code and surface alerts in pull requests – before the vulnerable code gets merged and released. To detect vulnerabilities in a repository, the CodeQL engine first builds a database that encodes a special relational representation of the code. On that database we can then execute a series of CodeQL queries, each of which is designed to find a particular type of security problem. Many vulnerabilities are caused by a single repeating pattern: untrusted user data is not sanitized and is subsequently accidentally used in an unsafe way. For example, SQL injection is caused by using untrusted user data in a SQL query, and cross-site scripting occurs as a result of untrusted user data being written to a web page.
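
The repeating pattern described above is easy to see in a few lines of code. The Python/sqlite3 sketch below (an illustration of the pattern, not CodeQL itself) shows untrusted input flowing unsanitised into a SQL string, next to the parameterised version that avoids the injection.

```python
# The repeating pattern described above: untrusted user data flowing into a
# SQL string (vulnerable) versus being passed as a bound parameter (safe).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

def find_user_vulnerable(name: str):
    # BAD: untrusted input concatenated into the query -> SQL injection.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # GOOD: untrusted input passed as a bound parameter, never parsed as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "x' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks every row in the table
print(find_user_safe(payload))        # returns nothing
```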


AI Is Helping Scientists Explain the Brain

A raging debate that erupted recently in the field of decision-making highlights these difficulties. It started with controversial findings of a 2015 paper in Science that compared two models for how the brain makes decisions, specifically perceptual ones [3]. Perceptual decisions involve the brain making judgments about what sensory information it receives: Is it red or green? Is it moving to the right or to the left? Simple decisions, but with big consequences if you are at a traffic stop. To study how the brain makes them, researchers have been recording the activity of groups of neurons in animals for decades. When the firing rate of neurons is plotted and averaged over trials, it gives the appearance of a gradually rising signal, “ramping up” to a decision. ... In the standard narrative based on an influential model that has been around since the 1990s, the ramp reflects the gradual accumulation of evidence by neurons. In other words, that is how neurons signal a decision: by increasing their firing rate as they collect evidence in favor of one choice or the other until they are satisfied.
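
The ramping signal is easy to reproduce in simulation. The sketch below (my illustration, with arbitrary parameters) accumulates noisy evidence toward a bound on each trial; averaging across many trials yields the gradually rising signal described.

```python
# Rough simulation of the accumulation-to-bound account described above: on each
# trial a noisy signal drifts toward a decision bound, and averaging over many
# trials produces a smooth "ramp". Parameters are arbitrary.
import random

def one_trial(drift=0.05, noise=0.1, steps=100, bound=5.0):
    x, trace = 0.0, []
    for _ in range(steps):
        if x < bound:                      # keep accumulating until the bound is reached
            x += drift + random.gauss(0.0, noise)
        trace.append(x)
    return trace

n_trials, steps = 500, 100
avg = [0.0] * steps
for _ in range(n_trials):
    avg = [a + v / n_trials for a, v in zip(avg, one_trial())]

# The averaged trace rises roughly linearly toward the bound - the "ramp".
print([round(avg[i], 2) for i in range(0, steps, 10)])
```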


What does the future of artificial intelligence look like within the life sciences?

The biggest hurdle for scientists is being able to more regularly adopt and implement the infrastructure and existing tools needed to run their lab using AI. This is especially true for open-ended research - or when scientists don't have a predefined notion of what experiments will need to happen in what steps to reach for the desired outcome. The current infrastructure for managing lab data was largely set up in the image of lab notebooks. Many companies are tackling this problem by trying to retrofit data generated in this model to fit the structure required for more in-depth data analysis. At ECL, we’ve tackled this problem by proceduralizing the lab activities themselves, as well as the storage of the data encompassing those activities. In this way, data is comprehensive, organized, reproducible, and ready to be deployed into any given analysis model. ... As scientists and companies recognize the reproducibility and trustworthiness of data generated in a cloud lab like ECL, their focus will shift away from concern over laboratory operations and logistics and more towards the science itself. 


From The Great Resignation To The Great Return: Bringing Back The Workforce

The biggest challenge is the enormous pressure on employees who don’t want to leave their jobs. Since talent leaders can’t fill open roles fast enough, employees that want to stay have had to take on the workload of multiple people in addition to their day-to-day responsibilities. In addition to that, it’s a candidate’s market, and job seekers have many job options and often have multiple offers. As a result, companies have to make hiring decisions faster and offer better benefits to attract talent and stand out among other companies. Another challenge, according to Cassady, is that employees are missing key connection points in this remote environment. “We have found that some of the key factors in retaining your workforce are that people need to feel connected to the company’s mission, the company’s leaders, and a connection to the team they work with.” In addition, she adds, “Talent leaders must continue to create communities within their company to retain their employees.”


The new rules of succession planning

First, start with the what and not the who. Doing so will lay out a more realistic and substantive framework. Second, from this vantage point, try to explicitly minimize the noise in the boardroom. Ensure that the directors are using shared, contextual definitions of core jargon, such as strategy, agility, transformation, and execution. Third, root the follow-on analyses of the candidates in that shared understanding, and base any assessments on a factual evaluation of their track records and demonstrated potential in order to minimize the bias of the decision-makers themselves. Many companies sidestep this hard work when developing their short list of candidates and rely instead on familiar paths: the CEO may have preferred candidates, or a search firm or industrial psychologist may have been asked to draft an ideal role profile or a set of competencies to prescreen internal and external candidates. This overemphasis on profiling the who of the next CEO triggers two failure points. It leans right into “great leader” biases (the notion that the right person will single-handedly solve all the company’s problems).


IT jobs: 7 hot automation skills in 2022

“One of the most important approaches to automation is infrastructure as code,” says Chris Nicholson, head of the AI team at Clipboard Health. “Infrastructure as code makes it easier to spin up and manage large clusters of compute, which in turn makes it easier to introduce new products and features quickly, and to scale in response to demand.” Kelsey Person, senior project manager at the recruiting firm LaSalle Network, agrees: Experience with infrastructure as code pops on a resume right now, because it indicates the knowledge and ability needed to help drive significant automation initiatives elsewhere. “One skill we are seeing in more demand is knowledge of DevOps tools, namely Ansible,” Person says. “It can help organizations automate and simplify tasks and can save time when developers and DevOps professionals are installing packages or configuring many servers.” The ability to write homegrown automation scripts is a mainstay of automation-centric jobs – it’s essentially the skill that never goes out of style, even as a wider range of tooling enables non-developers to automate some previously manual processes.
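
For readers new to the term, infrastructure as code boils down to describing the desired infrastructure declaratively and letting tooling reconcile reality against that description. The sketch below illustrates only the concept; real tools such as Terraform, CloudFormation or Ansible do this against actual cloud APIs, and the spec format here is invented.

```python
# Stripped-down sketch of the infrastructure-as-code idea: describe desired
# state declaratively, then let a reconcile step compute create/destroy actions.
# The spec format and roles below are placeholders, not a real provider API.
DESIRED = {
    "web": {"count": 3, "size": "small"},
    "db":  {"count": 1, "size": "large"},
}

def reconcile(desired: dict, current: dict) -> list:
    """Compare desired vs current instance counts and emit the actions needed."""
    actions = []
    for role, spec in desired.items():
        have = current.get(role, 0)
        want = spec["count"]
        if want > have:
            actions += [("create", role, spec["size"])] * (want - have)
        elif want < have:
            actions += [("destroy", role, None)] * (have - want)
    return actions

# The same spec can be versioned, reviewed and re-applied repeatedly.
print(reconcile(DESIRED, {"web": 1}))
# -> [('create', 'web', 'small'), ('create', 'web', 'small'), ('create', 'db', 'large')]
```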


Why cloud-based cellular location is the solution to supply chain disruption

Cloud-based cellular location leveraging 5G, in combination with seamless roaming integrated into a WAN, provides highly accurate end-to-end visibility, starting with sub-metre accuracy on the factory floor with private networks and extending to outdoor locations whenever and wherever an asset is transported, from the beginning to end of a supply chain. Cloud-based cellular location technologies are already in use today, leveraging ubiquitous 4G/5G networks for massive IoT asset tracking applications. Their adoption is expected to increase significantly and broaden to more and more critical IoT use cases as well. According to ABI Research, overall penetration of the cloud-based cellular location installed base will reach 42% by 2026. In this period, it’s estimated that there’ll be a four-fold increase in penetration driven largely by devices on Cat-1, Cat-M, and NB-IoT networks. Asset tracking will be the main driver of growth on these networks, as cloud-based cellular location becomes more important for driving down costs. Cloud-based cellular location can enable enterprises to unlock opportunities for critical IoT, and will help revolutionise supply chain management.



Quote for the day:

"Leadership is the wise use of power. Power is the capacity to translate intention into reality and sustain it." -- Warren Bennis

Daily Tech Digest - February 20, 2022

API Management vs. Service Mesh: The Choice Doesn’t Have to Be Yours

API management is often described as a north-south traffic management pattern, which connects services and applications with external clients. This north-south pattern also applies to inter-domain traffic, as we saw earlier. Companies control access to enterprise or domain boundaries and can discern who is allowed to access the systems, precisely which resources they are allowed to access, whether read and/or write permissions, and with customizable rate limits. This architecture provides authentication, traffic mediation, security, and encryption options, along with sophisticated authorization systems. In essence, it is about helping to manage the relationships between services or APIs and multiple consumers. ... Service meshes provide the connective tissue between services, ensuring that different parts of an application can reliably and securely share data with one another. They route requests from one service to the next, optimizing how all the moving parts work together. Within cloud-native application development approaches, they help to assemble large numbers of discrete services into functional applications. 
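
As a concrete example of one north-south control an API management layer applies, here is a small per-consumer token-bucket rate limiter in Python. The client IDs, limits and status codes are invented for illustration; production gateways provide this, alongside authentication and mediation, as managed policy.

```python
# Illustration of a per-consumer rate limit, one of the north-south controls
# described above. Client IDs and limits are invented.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

limits = {"partner-a": TokenBucket(rate_per_sec=5, burst=10),
          "partner-b": TokenBucket(rate_per_sec=1, burst=2)}

def handle_request(client_id: str) -> int:
    bucket = limits.get(client_id)
    if bucket is None:
        return 401                            # unknown consumer: fails authentication
    return 200 if bucket.allow() else 429     # 429 = rate limit exceeded

print([handle_request("partner-b") for _ in range(4)])  # e.g. [200, 200, 429, 429]
```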


Business Technology Consulting is a Way to Start Improving Customers as Business Leaders

IT consultants have good news: their services are still highly sought after. The COVID-19 pandemic has transformed the IT consulting industry. A combination of increased competition and more freelance and smaller specialized consultancies has created a highly competitive market. You will need to start your business on the right foot, just like any other business. IT professionals must create a detailed business plan to succeed in a highly competitive market. Structured plans should include growth, costs, marketing, sales, training, qualifications, and technology. Technology has changed the way that we live, shop and work. The technology revolution is continuing to transform everything about our lives. A robust technological foundation can help organizations increase their agility and productivity and identify new business opportunities. Technology consulting can be called many things, including IT consulting for business, IT services, and IT advisory. Companies must develop a secure and efficient Information Technology (IT) strategy to embark on a digital transformation journey. This is not an easy task for start-ups and corporations alike.


The Power and Possibilities of Data Science

Not only have job opportunities for data scientists cropped up everywhere, but the role has transformed the work life of millions of people who benefit from their innovations. Tasks that were once laboriously performed by people have become automated, freeing us humans in legal, financial, and corporate industries (and many others) to focus on more important and well, human work. So how did we get here, and what’s next for this growing industry? Late last year, leaders from Relativity and Text IQ, a Relativity company, gathered to talk about just that. In a Coffee + Chat session presented by Relativity’s talent team, Apoorv Agarwal, Aron Ahmadia, and Peter Haller discussed the origins of data science, where they see the industry going in the next few years, and what about artificial intelligence makes them most excited. “I think of data science as fundamentally people who love data and who believe that data can be used and leveraged to solve problems,” said Aron, director of data science at Relativity. In a previous role he worked with the U.S. Department of Defense, helping to disentangle networks of sex traffickers—and using data science to identify them.


Azure SQL Database ledger

Updatable ledger tables are ideal for application patterns that expect to issue updates and deletions to tables in your database, such as system of record (SOR) applications. Existing data patterns for your application don't need to change to enable ledger functionality. Updatable ledger tables track the history of changes to any rows in your database when transactions that perform updates or deletions occur. An updatable ledger table is a system-versioned table that contains a reference to another table with a mirrored schema. The other table is called the history table. The system uses this table to automatically store the previous version of the row each time a row in the ledger table is updated or deleted. The history table is automatically created when you create an updatable ledger table. ... Append-only ledger tables are ideal for application patterns that are insert-only, such as security information and event management (SIEM) applications. Append-only ledger tables block updates and deletions at the API level. This blocking provides more tampering protection from privileged users such as system administrators and DBAs.
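
A rough sketch of what creating the two table types might look like when issued from Python over pyodbc. The T-SQL options shown are approximate and should be verified against the Azure SQL ledger documentation; the connection string, table names and columns are placeholders.

```python
# Approximate DDL for the two ledger table types described above, issued via
# pyodbc. Verify the exact option syntax against the Azure SQL documentation;
# connection details, table names and columns are placeholders.
import pyodbc

conn = pyodbc.connect("DRIVER={ODBC Driver 18 for SQL Server};"
                      "SERVER=<server>.database.windows.net;DATABASE=<db>;"
                      "UID=<user>;PWD=<password>")
cur = conn.cursor()

# Updatable ledger table: system-versioned, with a mirrored history table.
cur.execute("""
CREATE TABLE dbo.Accounts (
    AccountId INT NOT NULL PRIMARY KEY,
    Balance   DECIMAL(10, 2) NOT NULL
)
WITH (
    SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountsHistory),
    LEDGER = ON
);
""")

# Append-only ledger table: updates and deletes are blocked at the API level.
cur.execute("""
CREATE TABLE dbo.SecurityEvents (
    EventId   BIGINT NOT NULL,
    EventTime DATETIME2 NOT NULL,
    Detail    NVARCHAR(4000)
)
WITH (LEDGER = ON (APPEND_ONLY = ON));
""")
conn.commit()
```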


Data Quality Dimensions

Data Quality dimensions compare with the way width, length, and height are used to express a physical object’s size. These Data Quality dimensions help us to understand Data Quality by its scale, and by comparing it to data measured against the same scale. Data Quality ensures an organization’s data can be processed and analyzed easily for any type of project. When the data being used is of high quality, it can be used for AI projects, business intelligence, and a variety of analytics projects. If the data contains errors or inconsistent information, the results of any project cannot be trusted. The accuracy of Data Quality can be measured using Data Quality dimensions. ... Data Quality dimensions can be used to measure (or predict) the accuracy of data. This measurement system allows data stewards to monitor Data Quality, to develop minimum thresholds, and to eliminate the root causes of data inconsistencies. However, there is currently no established standard for these measurements. Each data steward has the option of developing their own measurement system. 
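
Because there is no established standard for these measurements, any scoring scheme is a local choice. The sketch below shows one possible way to score a table against three common dimensions (completeness, uniqueness, validity) with pandas; the rules, thresholds and sample data are invented.

```python
# One possible way to score a dataset against a few Data Quality dimensions
# (completeness, uniqueness, validity). The dimension definitions and the
# sample data are illustrative only - there is no single standard.
import pandas as pd

def quality_scores(df: pd.DataFrame, key_col: str, validity_rules: dict) -> dict:
    completeness = 1 - df.isna().sum().sum() / df.size           # share of non-missing cells
    uniqueness = df[key_col].nunique() / len(df)                 # share of distinct key values
    passed = sum(df[col].apply(rule).sum() for col, rule in validity_rules.items())
    validity = passed / (len(df) * len(validity_rules))          # share of values passing their rule
    return {"completeness": round(completeness, 3),
            "uniqueness": round(uniqueness, 3),
            "validity": round(validity, 3)}

df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@x.com", None, "c@x", "d@x.com"],
    "age": [34, 29, -5, 41],
})
rules = {"email": lambda v: isinstance(v, str) and "@" in v and "." in v.split("@")[-1],
         "age":   lambda v: pd.notna(v) and 0 <= v <= 120}
print(quality_scores(df, key_col="customer_id", validity_rules=rules))
# -> {'completeness': 0.917, 'uniqueness': 0.75, 'validity': 0.625}
```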


What Is Web3 and How Will it Work?

Proponents envision Web3 as an internet that does not require us to hand over personal information to companies like Facebook and Google in order to use their services. The web would be powered by blockchain technology and artificial intelligence, with all information published on the public ledger of the blockchain. Similar to how cryptocurrency operates, everything would have to be verified by the network before being accepted. Online apps would theoretically let people exchange information or currency without a middleman. A Web3 internet would also be permissionless, meaning anyone could use it without having to generate access credentials or get permission from a provider. Instead of being stored on servers as it is now, the data that makes up the internet would be stored on the network. Any changes to, or movement of, that data would be recorded on the blockchain, establishing a record that would be verified by the entire network. In theory, this prevents bad actors from misusing data while establishing a clear record of where it’s going.


Social engineering: Definition, examples, and techniques

The phrase "social engineering" encompasses a wide range of behaviors, and what they all have in common is that they exploit certain universal human qualities: greed, curiosity, politeness, deference to authority, and so on. While some classic examples of social engineering take place in the "real world"—a man in a FedEx uniform bluffing his way into an office building, for example—much of our daily social interaction takes place online, and that's where most social engineering attacks happen as well. ... Fighting against all of these techniques requires vigilance and a zero-trust mindset. That can be difficult to inculcate in ordinary people; in the corporate world, security awareness training is the number one way to prevent employees from falling prey to high-stakes attacks. Employees should be aware that social engineering exists and be familiar with the most commonly used tactics. Fortunately, social engineering awareness lends itself to storytelling. And stories are much easier to understand and much more interesting than explanations of technical flaws. Quizzes and attention-grabbing or humorous posters are also effective reminders about not assuming everyone is who they say they are.


Decentralization revolutionizes the creator’s economy, but what will it bring?

Much like social tokens, nonfungible tokens (NFTs) are another innovation shaping the creator economy. Consider that the NFT-based crypto art market is now worth over $2.3 billion (as of mid-February 2022), pointing to the lucrative opportunity that artists have in accessing new monetization streams for their work. Meanwhile, NFTs can also be leveraged to engineer a new model of fan engagement as they reconcile virtual assets with real-world experiences. Enter the phygital experience — a mix of physical and digital. NFTs can be tied to real-world perks — if you’re a musician, that could mean a lifetime supply of concert tickets or VIP meet and greets and as an artist, a select number of prints in a collection — all while ensuring that these assets verifiably belong to a fan, attesting to their ownership and authenticity. As economies gradually reopen and we continue to see the eventual normalization of social activities, experiential NFTs as a tool for long-term fan engagement are likely to grow in popularity. Let’s not stop there, though: Enter interactive NFTs. These assets can change over time based on a fan’s modification to the content. 


How CSPs Are Now Using Blockchain

A fundamental issue in cloud computing is a reliance on a centralised server for data management and decision-making. Problems emerge, such as the failure of the central server, which can disrupt the entire system and result in the loss of crucial data kept on the central server. In addition, the central server is vulnerable to hacker attacks. Blockchain technology can help solve this problem because many copies of the same data are saved on various computer nodes in a decentralised system, eliminating the risk of the entire system failing if one server fails. Furthermore, data loss should not be an issue because many copies of the data are stored on various nodes. ... Leading cryptocurrency software company Blockchain achieved savings of 30 per cent by replacing its database layer with Google Cloud Spanner as it moves to managed services on Google Cloud. With millions of users across the globe relying on blockchain for information about and access to their funds, it’s no surprise that one of its core values is Sanctify Security. “Security is our top priority,” says Lewis Tuff, Blockchain’s head of platform engineering.


High Performance Decoupled Buses for IoT Displays

We exploit the fact that across almost all devices, there is similar required behavior. For example, devices have commands and data. The data is often parameters to commands, but sometimes it's a stream of pixels, although that is technically a BLOB parameter to a memory write command. Anyway, on an SPI device, you typically have an additional "DC" line that toggles between commands and data. I2C has something similar, except that the toggle is indicated by a code in the first byte of every I2C transaction. Parallel also has a DC line, though it's usually called RS, and it does the same thing as the SPI variant. The idea here is we are going to expand the surface area of our bus API to include everything applicable to any kind of bus, so for example, you may have begin_transaction() and end_transaction(), which for SPI define transaction boundaries, but do nothing in the parallel rendition. The I2C bus is pretty straightforward, but the SPI bus and parallel buses are significantly more complicated due to having processor-specific optimizations.
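
The article describes the shape of that widened bus API rather than giving code, so here is a loose Python rendering of the idea: one interface whose transaction calls matter for SPI but are no-ops elsewhere, and whose command/data split hides the DC line (SPI) versus the control code in the first byte (I2C). Only begin_transaction()/end_transaction() come from the text; the other names, and the injected spi/i2c/pin objects, are placeholders of mine.

```python
# Loose rendering of the expanded bus API described above: one interface whose
# surface area covers every bus type, where calls that don't apply to a given
# bus simply do nothing. Method names beyond begin/end_transaction are placeholders.
from abc import ABC, abstractmethod

class DisplayBus(ABC):
    def begin_transaction(self):  # meaningful for SPI, a no-op for parallel and I2C
        pass
    def end_transaction(self):
        pass

    @abstractmethod
    def write_command(self, cmd: int): ...
    @abstractmethod
    def write_data(self, data: bytes): ...

class SpiBus(DisplayBus):
    def __init__(self, spi, dc_pin):
        self.spi, self.dc = spi, dc_pin
    def begin_transaction(self):
        self.spi.acquire()          # lock the bus / assert chip select
    def end_transaction(self):
        self.spi.release()
    def write_command(self, cmd):
        self.dc.low()               # DC line low = command
        self.spi.write(bytes([cmd]))
    def write_data(self, data):
        self.dc.high()              # DC line high = data
        self.spi.write(data)

class I2cBus(DisplayBus):
    CMD_PREFIX, DATA_PREFIX = 0x00, 0x40   # control byte replaces the DC line
    def __init__(self, i2c, addr):
        self.i2c, self.addr = i2c, addr
    def write_command(self, cmd):
        self.i2c.writeto(self.addr, bytes([self.CMD_PREFIX, cmd]))
    def write_data(self, data):
        self.i2c.writeto(self.addr, bytes([self.DATA_PREFIX]) + data)

# A display driver written against DisplayBus runs unchanged over SPI, I2C or parallel.
```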



Quote for the day:

"One measure of leadership is the caliber of people who choose to follow you." -- Dennis A. Peer

Daily Tech Digest - February 19, 2022

CIO Strategy for Mergers & Acquisitions

The success of merging two organizations relies on multiple factors such as economic certainties, accurate valuations, proper identification of targets, strong due diligence processes and technology integration. However, the prominent factor among all these is technology integration, i.e. merging their IT systems. The IT systems of each organization consist of a set of applications, IT infrastructure, databases, licenses, technologies and their complexities. After integration, one set of systems and their infrastructure becomes redundant. The greater the amount of duplication, the higher the redundancy, leading to an increase in costs and complexity of an integration. The role of the CIO and Information Technology (IT) in M&A has become increasingly important, as the need for quick turnaround time is the primary factor. CIOs need to be involved during the deal preparation, assessment, and due diligence phases of M&A. In addition, the CIO’s team needs to identify key IT processes, IT risks, costs and synergies of the organization.


Eight countries jointly propose principles for mutual recognition of digital IDs

There are 11 principles in total, all contained in a report [PDF] about digital identity in a COVID-19 environment, that the DIWG envisions would be used by all governments when building digital identity frameworks. The principles are openness, transparency, reusability, user-centricity, inclusion and accessibility, multilingualism, security and privacy, technology neutrality and data portability, administrative simplicity, preservation of information, and effectiveness and efficiency. According to the DIWG, the principles aim to allow for a common understanding to guide future discussions on both mutual recognition and interoperability of digital identities and infrastructure. In providing the principles, the DIWG noted that mutual recognition and interoperability of digital identities between countries is still several years away, with the group saying there are foundational activities that need to be undertaken before it can be achieved. These foundational activities include creating a definition of a common language and definitions across digital identities, assessing and aligning respective legal and policy frameworks, and creating interoperable technical models and infrastructure.


Joel Spolsky on Structuring the Web with the Block Protocol

The Block Protocol is not the first attempt, however, at bringing structure to data presented on the web. The problem, says Spolsky, is that previous attempts — such as Schema.org or Dublin Core — have included that structure as an afterthought, as homework that could be left undone without any consequence to the creator. At the same time, the primary benefit of doing that homework was often to game search engine optimization (SEO) algorithms, rather than to provide structured data to the web at large. Search engines quickly caught on to that and began ignoring the content entirely, which led to web content creators abandoning these attempts at structure. Spolsky said this led them to ask one simple question: “What’s a way we can make it so that the web can be better structured, in a way that’s actually easier to write for a web developer than if they [had] left out the structure in the first place?” ... The basic building blocks of the web — HTML and CSS — describe content and how it should be displayed in a human-readable format, “but it doesn’t describe anything about that type of data or what the data is or what it does,” said Spolsky. 


Avoiding the Achilles Heel of Non-European Cybersecurity

US-based organizations are beholden to regulations such as the CLOUD Act and the US PATRIOT Act, which pose a risk to data belonging to any other region. Any application or solution built in the US — be it concerned with cybersecurity, hosting or collaboration — is required to have a backdoor built in, allowing third parties to access the data within, often without the owner ever knowing — particularly if they’re foreign. Moreover, on his last full day in office and following the large-scale SolarWinds attack, former President Trump signed an executive order decreeing that American IaaS cloud providers must keep a wealth of sensitive information on their foreign clients — names, physical and email addresses, national identification numbers, sources of payment, phone numbers and IP addresses — in order to help US authorities track down cyber-criminals. As these services include “destination” cloud networks, such as AWS, Microsoft Azure, and Google Cloud, it impacts many citizens and companies worldwide.


5 Questions for Evaluating DBMS Features and Capabilities

Among RDBMSs, both SQL Server and Snowflake use a kind of umbrella data type, VARIANT, to store data of virtually any type. The labor-saving dimension of typing is much less important here. For example, in the case of the VARIANT type, the database must usually be told what to do with this data. The emphasis in this definition of data type goes to the issue of convenience: BLOB and similar types are primarily useful as a means to store data in the RDBMS irrespective of the data’s structure. Google Cloud’s implementation of a JSON “data type” in BigQuery ticks both these boxes. First, it is labor-saving, in that BigQuery knows what to do with JSON data according to its type. Second, it is convenient, in that it gives customers a means to preserve and perform operations on data serialized in JSON objects. The implementation permits an organization to ingest JSON-formatted messages into the RDBMS (BigQuery) and to preserve them intact. Access to raw JSON data could be valuable for future use cases. It also makes it much easier for users to access and manipulate this data.


Digital payments: How banks can stave off fintech challengers

To safeguard their payments business, banks must pursue two main objectives: replace their existing legacy systems and improve the payment services and functionality they offer to retail and corporate customers. In this way, banks can ensure that their provision of payment services remains intact. Some banks have tried to solve this problem by acquiring a fintech challenger. Others have sought to build their own technology from scratch – although this has been shown to carry risks. However, one of the best options for banks is to find new partners, both in terms of technology and services, which they can work with to create a more loosely defined infrastructure for payment services. This in turn, will help them to become more agile in the payments sphere, according to Frank. “Banks like JP Morgan are a standard bearer here and commit huge sums to tech investment annually,” says Frank. “The key is to target a more agile tech stack both in terms of infrastructure – that is in terms of cloud adoption, enhanced security, devices and networks, as well as applications – whether it is delivered as a Software-as-a-Service (SaaS) or a white-labelled service.”


Cloud Data Management Disrupts Storage Silos and Team Silos Too

In the context of enterprise data storage, unstructured data management has been a practice for many years, although it originated in storage vendor platforms. Now that enterprises are using many different storage technologies — block storage for database and virtualization, NAS for user and application workloads, backup solutions in the data center or in the cloud — a storage-centric approach to data management no longer fits the bill. That’s because, among other reasons, storage vendor data management solutions don’t solve the problem of managing silos of data stored on different platforms. Silos hamper visibility and governance, leading to higher costs and poor utilization. As more workloads and data move to the cloud to save money and enable flexibility and innovation, cloud data management has become a growing practice. Cloud data management (CDM) goes beyond storage to meet the ever-changing needs for data mobility and access, cost management, security and, increasingly, data monetization. 


Executive Q&A: Data Management and the Cloud

Understanding which type of cloud database is the right fit is often the biggest challenge. It’s helpful to think of cloud-native databases as being in one of two categories: platform-native systems (i.e., offerings by cloud providers themselves) or in-cloud systems offered by third-party vendors. Platform-native solutions include Azure Synapse, BigQuery, and Redshift. They offer deep integration with the provider’s cloud. Because they are highly optimized for their target infrastructure, they offer seamless and immediate interoperability with other native services. Platform-native systems are a great choice for enterprises that want to go all-in on a given cloud and are looking for simplicity of deployment and interoperability. In addition, these systems offer the considerable advantage of having to deal with a single vendor only. In contrast, in-cloud systems tout cloud independence. This seems like a great advantage at first. However, moving hundreds of terabytes between clouds has its own challenges. In addition, customers inevitably end up using other platform-native services that are only available on a given cloud, which further reduces the perceived advantage of cloud independence.


The metaverse is a new word for an old idea

These are good conversations to have. But we would be remiss if we didn’t take a step back to ask, not what the metaverse is or who will make it, but where it comes from—both in a literal sense and also in the ideas it embodies. Who invented it, if it was indeed invented? And what about earlier constructed, imagined, augmented, or virtual worlds? What can they tell us about how to enact the metaverse now, about its perils and its possibilities? There is an easy seductiveness to stories that cast a technology as brand-new, or at the very least that don’t belabor long, complicated histories. Seen this way, the future is a space of reinvention and possibility, rather than something intimately connected to our present and our past. But histories are more than just backstories. They are backbones and blueprints and maps to territories that have already been traversed. Knowing the history of a technology, or the ideas it embodies, can provide better questions, reveal potential pitfalls and lessons already learned, and open a window onto the lives of those who learned them. 


Slow Down !! Cloud is Not for Everyone

“Most often It’s not the main course but Desserts that bloat your Bill.” In the cloud, it’s not only the cost of compute and memory, but the cost of lock-in. Assume you have an on-prem license of a database enterprise edition that couldn’t be ported to the cloud (incompatibility or contractual complications or much higher cloud licenses) and you opt to move into a native DB offered by your chosen cloud provider. What might appear as straight-cut migration efforts is basically a much deeper trap of locking you in with your cloud vendor. As the first step, you need to train your workforce; then slowly, you will be mandated to rewrite or replace all the homegrown and/or SaaS features of your product to be compatible with the new service. These efforts were never part of your earlier plan but have now become a critical necessity to keep the lights on. Say, after a certain period, you realize the cloud service is not a great fit and decide to shift back or move on to a better alternative; there comes the insidious lock-in effect. Vendors make such onward movement particularly difficult – you need to burn significant dollars to migrate out.



Quote for the day:

"When people talk, listen completely. Most people never listen." -- Ernest Hemingway

Daily Tech Digest - February 18, 2022

TrickBot Ravages Customers of Amazon, PayPal and Other Top Brands

The TrickBot malware was originally a banking trojan, but it has evolved well beyond those humble beginnings to become a wide-ranging credential-stealer and initial-access threat, often responsible for fetching second-stage binaries such as ransomware. Since the well-publicized law-enforcement takedown of its infrastructure in October 2020, the threat has clawed its way back, now sporting more than 20 different modules that can be downloaded and executed on demand. It typically spreads via emails, though the latest campaign adds self-propagation via the EternalRomance vulnerability. “Such modules allow the execution of all kinds of malicious activities and pose great danger to the customers of 60 high-profile financial (including cryptocurrency) and technology companies,” CPR researchers warned. “We see that the malware is very selective in how it chooses its targets.” It has also been seen working in concert with a similar malware, Emotet, which suffered its own takedown in January 2021.


‘Ice phishing’ on the blockchain

There are multiple types of phishing attacks in the web3 world. The technology is still nascent, and new types of attacks may emerge. Some attacks look similar to traditional credential phishing attacks observed on web2, but some are unique to web3. One aspect that the immutable and public blockchain enables is complete transparency, so an attack can be observed and studied after it occurred. It also allows assessment of the financial impact of attacks, which is challenging in traditional web2 phishing attacks. Recall that with the cryptographic keys (usually stored in a wallet), you hold the key to your cryptocurrency coins. Disclose that key to an unauthorized party and your funds may be moved without your consent. Stealing these keys is analogous to stealing credentials to web2 accounts. Web2 credentials are usually stolen by directing users to an illegitimate web site through a set of phishing emails. While attackers can utilize a similar tactic on web3 to get to your private key, given the current adoption, the likelihood of an email landing on the inbox of a cryptocurrency user is relatively low.


Cloud Security Alliance publishes guidelines to bridge compliance and DevOps

As for tooling, CSA called for organisations to embrace infrastructure as code to eliminate manual provisioning of infrastructure. They can do so through services such as AWS CloudFormation or capabilities from the likes of Chef, Ansible and Terraform, paving the way for automation, version control and governance. Organisations can also establish guardrails to constantly monitor software deployments to ensure alignment with their goals and objectives, including compliance. These guardrails can be represented as high-level rules with detective and preventive policies. Guardrails may be implemented as a means of compliance reporting, such as the number of machines running approved operating systems (OSes), or as remedies to non-compliance, such as shutting down machines running unapproved OSes. With a tendency to address risk directly through tooling, organisations can easily overlook the importance of having the appropriate mindset in DevSecOps transformation. CSA defines mindset as the ways to bring security teams and software developers closer together.


Use of Artificial Intelligence in the Banking World 2022

Chatbots are one of the most-used applications of artificial intelligence, not only in banking but across the spectrum. Once deployed, AI chatbots can work 24/7 to be available for customers. In fact, in several surveys and market research studies, it has been found that people actually prefer interacting with bots instead of humans. This can be attributed to the use of natural language processing for AI chatbots. With NLP, AI chatbots are better able to understand user queries and communicate in a seemingly humane way. An example of AI chatbots in banking can be seen in the Bank of America with Erica, the virtual assistant. Erica handled 50 million client requests in 2019 and can handle requests including card security updates and credit card debt reduction. Digital-savvy banking customers today need more than what traditional banking can offer. With AI, banks can deliver the personalized solutions that customers are seeking. An Accenture survey suggested that 54% of banking customers wanted an automated tool to help monitor budgets and suggest real-time spending adjustments.
 

Data democratisation and AI—The superpowers to augment customer experience in 2022

With the powerful combination of data and AI at their fingertips, teams can gain deeper insights into their customers. Such technologies can also provide recommendations for the next-best-action. Critical decisions such as the right message, right channel, and right time can be optimised to boost efficiency as well as delight consumers. For example, ecommerce brands can identify customers who buy from a specific luxury brand and personalise offers. Banks can determine customers who have not completed the onboarding journey and eliminate roadblocks to help them move towards completion. Music streaming apps can create custom playlists for each listener based on their preferred music and artists. In the past, these insights were gathered from multiple platforms, most times with the help of technology or data teams running Big Data queries. The time required to run these queries, draw insights, and then apply them was often long, which meant brands could not go to market faster.


Federated Machine Learning and Edge Systems

It helps to look at a practical use case. We're going to look at Federated Learning of Cohorts, or FLoCs, also developed by Google. It's essentially a proposal to do away with third-party cookies, because they're awful and nobody likes them, and they are a privacy nightmare. Many browsers are removing functionality for third-party cookies or automatically blocking third-party cookies. But what should we use in order to do targeted, personalized advertising if we don't have third-party cookies? That's what Google proposed FLoCs would do. Their idea for FLoCs is that you get assigned a cohort based on something you like, your browsing history, and so on. In the diagram below, we have two different cohorts: a group that likes plums and a group that likes oranges. Perhaps, if you were a fruit seller, you might want to target the plum cohort with plum ads and the orange cohort with orange ads, for example. The goal was to resolve the privacy problems and the poor user experience of online targeted ads, where sometimes a user would click on something and it would follow them for days.
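
A very rough sketch of cohort assignment in the spirit of FLoC: reduce each user's browsing history to a short locality-sensitive fingerprint, so users with similar histories tend to land in the same cohort. This is only an illustration of the idea, not Google's actual algorithm, and the domains are made up.

```python
# Tiny SimHash-style cohort assignment: similar browsing histories tend to
# produce similar (often identical) short fingerprints, which serve as cohort IDs.
# Illustrative only - not Google's actual FLoC algorithm.
import hashlib

def cohort_id(domains, bits=8) -> int:
    weights = [0] * bits
    for d in domains:
        h = int(hashlib.sha256(d.encode()).hexdigest(), 16)
        for i in range(bits):
            weights[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if weights[i] > 0)

alice = ["plums.example", "fruit-news.example", "recipes.example"]
bob   = ["plums.example", "fruit-news.example", "gardening.example"]
carol = ["oranges.example", "citrus-fans.example", "juicers.example"]

for name, history in [("alice", alice), ("bob", bob), ("carol", carol)]:
    print(name, "-> cohort", cohort_id(history))
# Alice and Bob overlap heavily, so their cohorts are likely to match; Carol's usually differs.
```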


Inside Look at an Ugly Alleged Insider Data Breach Dispute

The Premier lawsuit alleges that changes to the company's security controls - including disabling endpoint security - allowed Sohail's continued access to Premier "trade secrets" and other sensitive information after his resignation as CIO. It says that Sohail "colluded with or coerced" Pakistan-based Sajid Fiaz, who served as Premier's IT administrator while also being employed at Wiseman Innovations as an IT infrastructure manager and HIPAA officer. Premier alleges that Fiaz's actions related to its data security provided Sohail "unfettered access to the master password for endpoint security that enabled that data theft and misuse through USB drives connected to secure IT systems." It says Sohail had unrestricted access to copy data to and from the company laptops and that a forensic report showed that he retained and accessed .PST files of emails from Premier after resigning as CIO. ".PST files are an aggregated archive of all emails sent to and from an email address including all attachments," Premier says in court documents.


How challenging is corporate data protection?

When employees quit their jobs, there is a 37% chance an organization will lose IP. With 96% of companies noting they experience challenges in protecting corporate data from insider risk, it’s clear insider risk must be prioritized. However, ownership of the problem remains vaguely defined. Only 21% of companies’ cybersecurity budgets have a dedicated component to mitigate insider risk, and 91% of senior cybersecurity leaders still believe that their companies’ Board requires better understanding of insider risk. “With employee turnover and the shift to remote and collaborative work, security teams are struggling to protect IP, source code and customer information. This research highlights that the challenge is even more acute when a third of employees who quit take IP with them when they leave. On top of that, three-quarters of security teams admit that they don’t know what data is leaving when employees depart their organizations,” said Joe Payne, Code42 president and CEO. “Companies must fundamentally shift to a modern data protection approach – insider risk management (IRM) – that aligns with today’s cloud-based, hybrid-remote work environment and can protect the data that fuels their innovation, market differentiation and growth.”


High-Severity RCE Bug Found in Popular Apache Cassandra Database

John Bambenek, principal threat hunter at the digital IT and security operations company Netenrich, told Threatpost on Wednesday that he suspects that the non-default settings are “common in many applications around the world.” The situation isn’t looking as bad as Log4j, but it could still potentially be widespread, and it’s going to be a chore to dig out vulnerable installations, Bambenek said via email. “Unfortunately, there is no way to know exactly how many installations are vulnerable, and this is likely the kind of vulnerability that will be missed by automated vulnerability scanners,” he said. “Enterprises will have to go into the configuration files of every Cassandra instance to determine what their risk is.” Casey Bisson, head of product and developer relations at code-security solutions provider BluBracket, told Threatpost that the issue could have “a broad impact with very serious consequences,” as in, “Threat actors may be able to read or manipulate sensitive data in vulnerable configurations.”
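
As a rough illustration of what that per-instance configuration check could look like, here is a small Python sketch that scans a cassandra.yaml for the non-default UDF settings described in public advisories for this bug (CVE-2021-44521). The setting names are my reading of those advisories, not something stated in the excerpt above, so verify them against the official Apache Cassandra advisory before relying on this.

```python
import yaml  # pip install pyyaml

# Assumed risky combination per public advisories for CVE-2021-44521 --
# confirm against the official Apache Cassandra advisory.
RISKY_COMBINATION = {
    "enable_user_defined_functions": True,           # UDFs enabled
    "enable_scripted_user_defined_functions": True,  # scripted (JavaScript) UDFs enabled
    "enable_user_defined_functions_threads": False,  # UDF thread isolation turned off
}

def looks_vulnerable(path="cassandra.yaml"):
    with open(path) as f:
        conf = yaml.safe_load(f) or {}
    return all(conf.get(key) == value for key, value in RISKY_COMBINATION.items())

if __name__ == "__main__":
    print("risky non-default configuration" if looks_vulnerable()
          else "not in the flagged configuration")
```

A script like this would have to be run against every instance's config, which is exactly the kind of manual sweep Bambenek says automated scanners are likely to miss.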


Top tips for entering an IT partnership for the first time

For businesses looking to strike up an IT partnership, it is crucial to ensure that potential partners are on the same page and working towards similar outcomes. A shared vision leads to greater understanding, trust, and sound judgement throughout the project. Organisations should therefore spend sufficient time when selecting a partner to understand their exact values, ideas, goals and ambitions. To accomplish this, it is recommended to physically visit potential partners, understand their culture and apply a human-to-human approach - whilst recognising that this is not a one-time exercise but an ongoing process of improving and building a solid relationship. Businesses must also ensure the partner is relevant to the project by matching their skills, ideas and experience to the client's needs. Establishing a partnership that runs contrary to client needs will not only lead to customer dissatisfaction, but will also cause the value of the partnership itself to be overlooked.



Quote for the day:

"A leader should demonstrate his thoughts and opinions through his actions, not through his words." -- Jack Weatherford

Daily Tech Digest - February 17, 2022

Reflections on Failure, Part One

There are many reasons why failures in security are inevitable. As I wrote previously, the human minds practicing security are fatally flawed and will therefore make mistakes over time. And even if our reasoning abilities were free of bias, we would still not know everything there is to know about every possible system. Security is about reasoning under uncertainty for both attacker and defender, and sometimes our uncertainty will result in failure. None of us know how to avoid all mistakes in our code, all configuration errors, and all deployment issues. Further still, learning technical skills in general and “security” in particular requires a large amount of trial and error over time. But we can momentarily disregard our biased minds, the practically unbridgeable gap between what we can know and what is true, and even the simple need to learn skills and knowledge on both sides of the fence. The inevitability of failure follows directly from our earlier observation about conservation. If failure is conserved between red and blue, then every action in this space can be interpreted as one. 


APIs in Web3 with The Graph — How It Differs from Web 2.0

The Graph protocol is being built by a company called Edge & Node, which Yaniv Tal is the CEO of. Nader Dabit, a senior engineer who I interviewed for a recent post about Web3 architecture, also works for Edge & Node. The plan for the company seems to be to build products based on The Graph, as well as make investments in the nascent ecosystem. There’s some serious API DNA in Edge & Node. Three of the founders (including Tal) worked together at MuleSoft, an API developer company acquired by Salesforce in 2018. MuleSoft was founded in 2007, near the height of Web 2.0. Readers familiar with that era may also recall that MuleSoft acquired the popular API-focused blog, ProgrammableWeb, in 2013. Even though none of the Edge & Node founders were executives at MuleSoft, it’s interesting that there is a thread connecting the Web 2.0 API world and what Edge & Node hopes to build in Web3. There are a lot of technical challenges for the team behind The Graph protocol — not least of all trying to scale to accommodate multiple different blockchain platforms. Also, the “off-chain” data ecosystem is complex and it’s not clear how compatible different storage solutions are with one another.


Introducing Apache Arrow Flight SQL: Accelerating Database Access

While standards like JDBC and ODBC have served users well for decades, they fall short for databases and clients which wish to use Apache Arrow or columnar data in general. Row-based APIs like JDBC or PEP 249 require transposing data in this case, and for a database which is itself columnar, this means that data has to be transposed twice—once to present it in rows for the API, and once to get it back into columns for the consumer. Meanwhile, while APIs like ODBC do provide bulk access to result buffers, this data must still be copied into Arrow arrays for use with the broader Arrow ecosystem, as implemented by projects like Turbodbc. Flight SQL aims to get rid of these intermediate steps. Flight SQL means database servers can implement a standard interface that is designed around Apache Arrow and columnar data from the start. Just like how Arrow provides a standard in-memory format, Flight SQL saves developers from having to design and implement an entirely new wire protocol. As mentioned, Flight already implements features like encryption on the wire and authentication of requests, which databases do not need to re-implement.
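
To give a feel for the columnar round trip being described, here is a rough sketch using pyarrow's generic Flight client; Flight SQL layers standardized query and metadata commands on top of this machinery. The server address and query text are placeholders, and a real Flight SQL client wraps the request in defined command messages rather than passing a raw SQL string to for_command.

```python
import pyarrow.flight as flight

# Hypothetical server location and query -- placeholders, not a real deployment.
client = flight.FlightClient("grpc://localhost:31337")
descriptor = flight.FlightDescriptor.for_command(b"SELECT * FROM trades")

info = client.get_flight_info(descriptor)          # where and how to fetch the result
reader = client.do_get(info.endpoints[0].ticket)   # stream the Arrow record batches
table = reader.read_all()

# The result arrives as Arrow columns -- no row-wise transposition on either
# end -- ready for anything else in the Arrow ecosystem.
print(table.num_rows, table.schema)
```

The point of the sketch is the shape of the exchange: the client receives Arrow record batches directly, so a columnar database never has to flatten results into rows only for the consumer to rebuild columns on the other side.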


The Graph (GRT) gains momentum as Web3 becomes the buzzword among techies

One of the main reasons for the recent increase in attention for The Graph is the growing list of subgraphs offered by the network for popular decentralized applications and blockchain protocols. Subgraphs are open application programming interfaces (APIs) that can be built by anyone and are designed to make data easily accessible. The Graph protocol is working on becoming a global graph of all the world’s public information, which can then be transformed, organized and shared across multiple applications for anyone to query. ... A third factor helping boost the prospects for GRT is the rising popularity of Web3, a topic and sector that has increasingly begun to make its way into mainstream conversations. Web3 as defined by Wikipedia is an “idea of a new iteration of the World Wide Web that is based on blockchain technology and incorporates concepts such as decentralization and token-based economics.” The overall goal of Web3 is to move beyond the current form of the internet where the vast majority of data and content is controlled by big tech companies, to a more decentralized environment where public data is more freely accessible and personal data is controlled by individuals.
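
For a sense of what "open APIs that anyone can query" means in practice, here is a minimal sketch of hitting a subgraph's GraphQL endpoint over plain HTTP. The hosted-service URL and the entity and field names are illustrative assumptions; every subgraph defines its own schema.

```python
import requests

# Illustrative endpoint and schema: each subgraph publishes its own entities/fields.
SUBGRAPH_URL = "https://api.thegraph.com/subgraphs/name/uniswap/uniswap-v2"
QUERY = """
{
  pairs(first: 3, orderBy: volumeUSD, orderDirection: desc) {
    id
    token0 { symbol }
    token1 { symbol }
  }
}
"""

resp = requests.post(SUBGRAPH_URL, json={"query": QUERY}, timeout=30)
resp.raise_for_status()
for pair in resp.json()["data"]["pairs"]:
    print(pair["id"], pair["token0"]["symbol"], "/", pair["token1"]["symbol"])
```

Nothing here requires running a blockchain node: the subgraph has already indexed the on-chain data, and the consumer just issues a GraphQL query like any other Web API call.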


Brain-Inspired Chips Good for More than AI, Study Says

Neuromorphic chips typically imitate the workings of neurons in a number of different ways, such as running many computations in parallel. ... Furthermore, whereas conventional microchips use clock signals fired at regular intervals to coordinate the actions of circuits, activity in a neuromorphic architecture is often spiking in nature, triggered only when an electrical charge reaches a specific value, much like what happens in brains like ours. Until now, the main advantage envisioned for neuromorphic computing was power efficiency: features such as spiking and the uniting of memory and processing gave IBM’s TrueNorth chip a power density four orders of magnitude lower than conventional microprocessors of its time. “We know from a lot of studies that neuromorphic computing is going to have power-efficiency advantages, but in practice, people won’t care about power savings if it means you go a lot slower,” says study senior author James Bradley Aimone, a theoretical neuroscientist at Sandia National Laboratories in Albuquerque.
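
A toy leaky integrate-and-fire loop makes the "spiking" behaviour concrete: nothing fires on a clock tick; a unit only emits an event once its accumulated charge crosses a threshold. This is a conceptual illustration, not how TrueNorth or any particular chip is actually programmed.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    # Leak a little charge each step, add the incoming current, and emit a
    # spike only when the accumulated potential crosses the threshold.
    potential, spike_times = 0.0, []
    for t, current in enumerate(input_current):
        potential = potential * leak + current
        if potential >= threshold:        # event-driven: fire only on crossing
            spike_times.append(t)
            potential = 0.0               # reset after the spike
    return spike_times

print(lif_neuron([0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # the time steps at which it fired
```

Because most units are silent most of the time, energy is spent only on the events that actually carry information, which is where the power-efficiency advantage comes from.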


Could Biology Hold the Clue to Better Cybersecurity?

The framework is designed to inoculate a user from ransomware, remote code execution, supply chain poisoning, and memory-based attacks. "If we're going to change the way we protect assets, we need to take a completely different approach," says Dave Furneaux, CEO of Virsec. "Companies are spending more and more money on solutions and not seeing any improvement." Furneaux likens the approach to the mRNA technology that vaccine makers Moderna and Pfizer have used. "Once you determine how to adapt a cell and the way it might behave in response to a threat, you can better protect the organism," Furneaux says. In biology, this works from the inside out. In cybersecurity, the method goes down into the lowest building blocks of software — which are like the cells in a body — to protect the entire system. "By understanding the RNA and DNA, we can create the equivalent of a vaccine," Furneaux adds. Other cybersecurity vendors, including Darktrace, Vectra AI, and BlackBerry Cybersecurity, have also developed products that rely to some degree on biological models.


In the Web3 Age, Community-Owned Protocols Will Deliver Value to Users

It's a virtuous cycle, Oshiro told Decrypt. As adoption of 0x increases, the protocol becomes Web3's foundational layer for tokenized value exchange. That, in turn, drives adoption by integrators who build on 0x, generating more economic value for themselves and users—ultimately bringing the trillions of dollars of economic value that the Internet has already created to the users of the next-generation decentralized Internet. ... Building exchange infrastructure on top of rapidly evolving blockchains means the 0x Protocol will need to be constantly tweaked and improved. Since its launch, 0x has been gradually transitioning all decisions over infrastructure upgrades and management of the treasury to its token holders. “The ability to upgrade comes along with an immense amount of power and a ton of downstream externalities,” Warren said. “And so it’s critical that the only ones who can update the infrastructure are the stakeholders and the people who are building businesses on top of it—that is how we’re thinking about this.”


How to Make Cybersecurity Effective and Invisible

CIOs have a balance to strike: Security should be robust, but instead of being complicated or restrictive, it should be elegant and simple. How do CIOs achieve that "invisible" cybersecurity posture? It requires the right teams, superior design, and cutting-edge technology, processes, and automation. Expertise and Design: Putting the Right Talent and Security Architecture to Work for You. Organizations hoping to achieve invisible cybersecurity must first focus on talent and technical expertise. Security can no longer be handled only through awareness, policy, and controls. It must be baked into everything IT does as a fundamental design element. The IT landscape should be assessed for weaknesses, and an action plan should then be put in place to mitigate risk through short-term actions. Long term, organizations need to design a landscape that is more compartmentalized and resilient, by implementing strategies like zero trust and microsegmentation. For this, companies need the right expertise. Given cybersecurity workforce shortages, organizations may need to identify and onboard an IT partner with strong cyber capabilities and offerings.


Secure Code Quickly as You Write It

Most developers aren’t security experts, so tools that are optimized for the needs of the security team are not always efficient for them. A single developer doesn’t need to know every bug in the code; they just need to know the ones that affect the work they’ve been assigned to fix. Too much noise is disruptive and causes developers to avoid using security tools. Developers also need tools that won’t disrupt their work. By the time security specialists find issues downstream, developers have moved on. Asking them to leave the IDE to analyze issues and determine potential fixes results in costly rework and kills productivity. Even teams that recognize the upside of checking their code and open source dependencies for security issues often avoid the security tools they’ve been given because it drags down their productivity rates. What developers need are tools that provide fast, lightweight application security analysis of source code and open source dependencies right from the IDE. Tooling like this enables developers to focus on issues that are relevant to their current work without being burdened by other unrelated issues.


Data Patterns on Edge

Most of the data in the internet space falls into this bucket: the enterprise data set comprises many interdependent services working in a hierarchy to extract the required data sets, which can be personalized or generic in format. Traditionally, moving this data to the edge was limited to pushing static resources, header data sets or media files to the edge or the CDN, while the base data set was still retrieved from the source DC or the cloud provider. For user experiences, optimization centers on the critical rendering path and the associated improvement in navigation timelines for web-based experiences, and on how much of the view model is offloaded to the app binary in device experiences. In hybrid experiences, the state model is updated periodically via server push or poll. The use case under discussion is how we can enable retrieval of personalized data sets from the edge.
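
One way to read that use case, sketched here under assumptions the article does not spell out, is to cache responses at the edge keyed by a coarse personalization segment rather than by the individual user, so most requests can be served without a round trip to the origin DC. The segment derivation, cache, and origin call below are all hypothetical placeholders.

```python
from functools import lru_cache

def segment_for(profile: dict) -> str:
    # Assumption: personalization collapses to a handful of coarse segments.
    return f"{profile.get('locale', 'en')}-{profile.get('tier', 'free')}"

def fetch_from_origin(segment: str) -> dict:
    # Placeholder for the call back to the source DC / cloud provider.
    return {"segment": segment, "recommendations": ["..."]}

@lru_cache(maxsize=1024)                 # stand-in for the edge node's response cache
def edge_view_model(segment: str) -> dict:
    return fetch_from_origin(segment)    # origin is hit only on a cache miss

print(edge_view_model(segment_for({"locale": "de", "tier": "pro"})))
```

The trade-off is granularity: the coarser the segment, the higher the edge cache hit rate, but the less individually tailored the data set becomes.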



Quote for the day:

"A leader is the one who climbs the tallest tree, surveys the entire situation and yells wrong jungle." -- Stephen Covey