Daily Tech Digest - December 18, 2021

10 Key AI & Data Analytics Trends for 2022 and Beyond

Whilst most research is understandably focused on pushing the boundaries of complexity, the reality is that training and running complex models can have a big impact on the environment. It’s predicted that data centres will represent 15% of global CO2 emissions by 2040, and a 2019 research paper, “Energy considerations for Deep Learning,” found that training a natural language translation model emitted CO2 levels equivalent to four family cars over their lifetime. Clearly, the more training, the more CO2 is released. With a greater understanding of environmental impact, organisations are exploring ways to reduce their carbon footprint. Whilst we can now use AI to make data centres more efficient, the world should expect to see more interest in simple models that perform as well as complex ones for solving specific problems. Realistically, why should we use a 10-layer convolutional neural network when a simple Bayesian model performs equally well while using significantly less data, training, and compute power? “Model efficiency” will become a byword for environmental AI, as creators focus on building simple, efficient, and usable models that don't cost the earth.
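
As a toy illustration of the model-efficiency argument, the sketch below (assuming scikit-learn is installed; the dataset and model sizes are arbitrary) pits a lightweight Bayesian classifier against a much heavier neural network on the same problem:

```python
# A minimal sketch of "model efficiency": on many simple problems a
# lightweight Bayesian model is competitive with a heavier neural network
# while using far less compute. Dataset and models are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

simple = GaussianNB().fit(X_train, y_train)            # trains in milliseconds
heavy = MLPClassifier(hidden_layer_sizes=(128, 128),
                      max_iter=1000,
                      random_state=0).fit(X_train, y_train)

print("naive Bayes accuracy:   ", simple.score(X_test, y_test))
print("neural network accuracy:", heavy.score(X_test, y_test))
```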


“Digital Twin” with Python: A hands-on example

IBM defines a digital twin as follows: “A digital twin is a virtual model designed to accurately reflect a physical object.” They go on to describe how the main enabling factors for creating a digital twin are the sensors that gather data and the processing system that inserts the data, in some particular format/model, into the digital copy of the object. Further, IBM says, “Once informed with such data, the virtual model can be used to run simulations, study performance issues and generate possible improvements”. ... So, how do we use our favorite language Python to create a digital twin? Why do we even think it will work? The answer is deceptively simple. Just look at the figure above and then at the one below to see the equivalency between a Digital Twin model and a classic Python object. We can emulate the sensors and data processors with suitable methods/functions, store the gathered data in a database or internal variables, and encapsulate everything into a Python class.
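
A minimal sketch of that idea might look like the following; the pump, its sensor fields, and the method names are all invented for illustration:

```python
# A minimal sketch of a digital twin as a plain Python class, along the
# lines described above: methods emulate the sensor and data processor,
# internal variables store the gathered state. Names are illustrative.
import random
import time

class PumpDigitalTwin:
    """Virtual model of a (hypothetical) physical pump."""

    def __init__(self, pump_id):
        self.pump_id = pump_id
        self.history = []      # gathered sensor data (could be a database)

    def read_sensor(self):
        """Emulate a sensor reading; a real twin would poll actual hardware."""
        return {"t": time.time(), "rpm": 3000 + random.gauss(0, 50)}

    def ingest(self, reading):
        """Emulate the processing system that inserts data into the twin."""
        self.history.append(reading)

    def simulate(self):
        """Run a simple 'simulation' over the gathered data."""
        rpms = [r["rpm"] for r in self.history]
        return {"mean_rpm": sum(rpms) / len(rpms), "samples": len(rpms)}

twin = PumpDigitalTwin("pump-01")
for _ in range(10):
    twin.ingest(twin.read_sensor())
print(twin.simulate())
```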


Patterns for Authorization in Microservices

When you have a monolith, you generally only need to talk to one database to decide whether a user is allowed to do something. An authorization policy in a monolith doesn't need to concern itself too much with where to find the data (such as user roles) — you can assume all of it is available, and if any more data needs to be loaded, it can be easily pulled in from the monolith's database. But the problem gets harder with distributed architectures. Perhaps you're splitting your monolith into microservices, or you're developing a new compute-heavy service that needs to check user permissions before it runs jobs. Now, the data that determines who can do what might not be so easy to come by. You need new APIs so that your services can talk to each other about permissions: "Who's an admin on this organization? Who can edit this document? Which documents can they edit?" To make a decision in service A, we need data from service B. How does a developer of service A ask for that data? How does a developer of service B make that data available?
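
One hypothetical shape for such an API, sketched in Python: service A exposes a helper that asks service B's (assumed) permissions endpoint whether a user may act on a resource. The endpoint name and payload are assumptions, not any real product's API:

```python
# A hypothetical sketch of service A asking service B for authorization
# data over HTTP. Endpoint, parameters, and response shape are invented.
import requests  # third-party; pip install requests

AUTHZ_SERVICE = "https://service-b.internal/api"   # hypothetical URL

def can_edit_document(user_id: str, document_id: str) -> bool:
    """Service A delegates the permission check to service B."""
    resp = requests.get(
        f"{AUTHZ_SERVICE}/permissions",
        params={"user": user_id, "action": "edit", "resource": document_id},
        timeout=2,   # authorization sits on the hot path, so fail fast
    )
    resp.raise_for_status()
    return resp.json().get("allowed", False)

# In a request handler in service A:
# if not can_edit_document(current_user, doc_id):
#     return 403
```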


How the use of AI and advanced technology is revolutionizing healthcare

In the payments realm, Mastercard® Healthcare Solutions optimizes the workflow for payers and providers by automating repetitive and error-prone operations, such as billing and claims processing. According to CIO magazine, many hospitals are now using AI to automate mundane tasks, reduce workloads, eliminate errors and speed up the revenue cycle. The author notes AI’s effectiveness for reducing incorrect payments for erroneous billings, and for preventing the labor-intensive process of pulling files, resubmitting to payers and eventual payment negotiations. ... The successful use of AI for FWA prevention is increasing in popularity. A recent study by PMYNTS revealed that approximately 12 percent of the 100 sector executives surveyed use AI in healthcare payments, three times the number using AI in 2019. Nearly three-quarters of the 100 execs plan to implement AI by 2023. ... These are all important factors when building an AI model and show the need to demonstrate return on investment (ROI) through a proof of concept.


IBM Brings AI to Monitor Petabytes of Network Traffic

“As we surround applications with our capabilities, we will understand the traffic flow and the performance and what’s normal,” Coward says. “The longer you run the AI within the network, the more you know about what typically happens on a Tuesday afternoon in Seattle.” A key aspect of SevOne is the ability to take raw network performance data from sources such as SNMP traps, logs in Syslog format, and even packets captured from network taps, combine it in a database, and then generate actionable insights from that blended data. “The uniqueness of SevOne is really that we put it into a time-series database. So we understand, for all those different events, how they are captured, and we can correlate them,” Coward explains. “That sounds like an extraordinarily simple thing to do. When you’re trying to do that at scale across a wide network where you literally have petabytes of data being created, it creates its own challenges.” The insights generated from SevOne can take the form of dashboards that anyone can view to see if there’s a network problem, thereby eliminating the need to call IT.
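
The core idea, stripped of scale and product detail, can be sketched in a few lines of Python; the event fields and the fixed correlation window are invented for illustration:

```python
# A toy sketch of the correlation idea above: normalize events from
# different sources (SNMP traps, syslog, packet captures) into one
# time-indexed structure so nearby events on a device can be grouped.
from collections import defaultdict

events = [
    {"ts": 1639600001, "source": "snmp",   "device": "sw-01", "event": "linkDown"},
    {"ts": 1639600002, "source": "syslog", "device": "sw-01", "event": "IF down Gi0/1"},
    {"ts": 1639600950, "source": "pcap",   "device": "sw-02", "event": "retransmits"},
]

def correlate(events, window=5):
    """Group events on the same device that occur within `window` seconds."""
    by_device = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_device[e["device"]].append(e)
    groups = []
    for device, evs in by_device.items():
        group = [evs[0]]
        for e in evs[1:]:
            if e["ts"] - group[-1]["ts"] <= window:
                group.append(e)
            else:
                groups.append((device, group))
                group = [e]
        groups.append((device, group))
    return groups

for device, group in correlate(events):
    if len(group) > 1:
        print(device, "correlated:", [e["event"] for e in group])
```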


Ethics in Tech: 5 Actions to Lead, Sustain Growth

The rapid deployment of AI into societal decision-making—in areas such as health care recommendations, hiring decisions, and autonomous driving—has catalyzed ongoing ethics discussions regarding trustworthy AI. These considerations are in early stages. Future issues could arise as tech goes beyond AI. Focus is intensifying on the importance of deploying AI-powered systems that benefit society without sparking unintended consequences with respect to bias, fairness, or transparency. Technology is increasingly a focal point in discussions about efforts to deceive using disinformation, misinformation, deepfakes, and other misuses of data to attack or manipulate people. Some tech companies are asking governments to pass regulations clearly outlining responsibilities and standards, and many organizations are cooperating with law enforcement and intelligence agencies to promote vigilance and action. ... Many technology organizations are facing demands from stakeholders to do more than required by law to adopt sustainable measures such as promoting more efficient energy use and supply chains, reducing manufacturing waste, and decreasing water use in semiconductor fabrication.


The 5 Characteristics of a Successful Data Scientist

Everything is connected in some way, well beyond the obvious, which leads to layer upon layer of real-world complexity. Complex systems interact with other complex systems to produce additional complex systems of their own, and so goes the universe. This game of complexity goes beyond just recognizing the big picture: where does this big picture fit into the bigger picture, and so on? But this isn't just philosophical. Data scientists recognize this infinite real-world web of complexity. They are interested in knowing as much as possible about relevant interactions, latent or otherwise, as they work through their problems. They look for situation-dependent known knowns, known unknowns, and unknown unknowns, understanding that any given change could have unintended consequences elsewhere. It is the data scientist's job to know as much about their relevant systems as possible, and to leverage their curiosity and predictive analytical mindset to account for as much of these systems' operations and interactions as feasible, in order to keep those systems running smoothly even as they are tweaked.


PYMNTS DeFi Series: Unpacking DeFi and DAO

Like any public blockchain, the open-source code is viewable by the public. Since there is no human being in control, users can be certain the code will execute according to the rules it contains. As the industry saying goes, “code is law.” DAOs are controlled by a type of cryptocurrency called governance tokens, which give token holders a vote on the project. The investment is based on the idea that as the platform attracts more users and funds are deposited into its lending pools, the total value locked (TVL) increases and its tokens become more valuable. Aave has nearly $14 billion in TVL, but the AAVE token is not loaned out. The Aave protocol’s voters have allowed lenders to lock 30 different cryptocurrencies, each of which has interest rates for lenders and borrowers set by the smart contract rules. Different protocols have different voting rules, but almost all come down to this: Token holders can propose a rule change. If it gets enough support, a vote is scheduled; if enough voters support it, the proposal passes, the code is updated, and the protocol’s rules change.
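
A toy Python sketch of that token-weighted flow follows; the thresholds and names are invented, and real protocols implement this logic in on-chain smart contracts rather than off-chain code:

```python
# A simplified sketch of governance-token voting: voting power is
# proportional to token balance, and a proposal passes when support
# crosses a quorum. Thresholds are invented for illustration.
class Proposal:
    def __init__(self, description):
        self.description = description
        self.votes_for = 0
        self.votes_against = 0

class ToyDAO:
    QUORUM = 400                   # tokens that must vote "for" to pass

    def __init__(self, balances):
        self.balances = balances   # {address: governance-token balance}

    def vote(self, proposal, voter, support: bool):
        weight = self.balances.get(voter, 0)
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def passes(self, proposal):
        return (proposal.votes_for >= self.QUORUM
                and proposal.votes_for > proposal.votes_against)

dao = ToyDAO({"alice": 300, "bob": 150, "carol": 50})
p = Proposal("Add a new collateral asset")
dao.vote(p, "alice", True)
dao.vote(p, "bob", True)
dao.vote(p, "carol", False)
print("proposal passes:", dao.passes(p))   # True: 450 for vs 50 against
```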


How Does Blockchain Help with Digital Identity?

It is well understood that blockchain-based digital identity management is robust and encrypted, ensuring security and ease of portability; hence the push for its effective incorporation to improve the socio-economic well-being of users, which is closely tied to digital identity. With time and advancing technologies, digital identity has become an essential entity that enables users to exercise various rights and privileges. Although blockchain has various benefits for managing digital identities, it cannot be considered a panacea. Blockchain technology is continuously developing, and though it offers multiple benefits, various challenges remain for anyone aiming to completely replace traditional identity management methods with it. Some of the known challenges include the constantly evolving technology and the lack of standardization for data exchange. Considering the benefits that come with transparency and the trust earned through blockchain frameworks, numerous organizations are joining forces to ensure interoperability across their borders.


Why 2022 Will Be About Databases, Data Mesh, and Open Source Communities

Data lakes will continue their dominance as essential for enabling analytics and data visibility; 2022 will see rapid expansion of a thriving ecosystem around data lakes, driven by enterprises seeking greater data integration. As organizations work out how to introduce data from third-party systems and real-time transactional production workloads into their data lakes, technologies such as Apache Kafka and Pulsar will take on those workloads and grow in adoption. Beyond introducing data to enable BI reporting and analytics, technologies such as Debezium and Kafka Connect will also enable data lake connectivity, powering services that require active data awareness. Expect that approaches leveraging an enterprise message bus will become increasingly common as well. Organizations in a position to benefit from the rise of integration solutions should certainly move on these opportunities in 2022. Related to this trend (and to Trend #1 as well): the emerging concept of a data mesh will really come into its own in 2022. 
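
As a rough sketch of that connectivity pattern, the snippet below (assuming the kafka-python client and a hypothetical Debezium-style CDC topic; all names and paths are invented) consumes change events and lands them in date-partitioned data lake files:

```python
# A hedged sketch of the pattern described above: consume change events
# from a Kafka topic (e.g., one populated by a Debezium connector) and
# land them in a data lake. Topic name and lake layout are assumptions.
import json
import os
from datetime import datetime, timezone
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "cdc.inventory.orders",                   # hypothetical Debezium topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for msg in consumer:
    # Partition files by date, a common data lake layout.
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    path = f"datalake/orders/dt={day}"
    os.makedirs(path, exist_ok=True)
    with open(f"{path}/events.jsonl", "a") as f:
        f.write(json.dumps(msg.value) + "\n")
```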



Quote for the day:

"The greatest leader is not necessarily the one who does the greatest things. He is the one that gets the people to do the greatest things." -- Ronald Reagan

Daily Tech Digest - December 17, 2021

2022: Supply-Chain Chronic Pain & SaaS Security Meltdowns

With the rise of SaaS adoption, we have witnessed the parallel development of a “business application mesh,” which enables organizations to build custom business logic across multiple, disparate SaaS applications. This mesh also enables transitive trust relationships to be created that allow data to move among these SaaS applications without a central authority that has visibility into or governs the movement of this data. In the past, our IT architecture enabled the enterprise to have a view of how users were interacting with multiple different applications, while remaining at the center of the interactions. But with the business application mesh in place, SaaS applications are connected to each other directly without the enterprise being at the center. GitHub is now automated to interact with Slack on behalf of my organization, for instance. Jira is connected directly with Salesforce. Hubspot sends data to a myriad of other SaaS applications. The growing network of integrations enables automated business workflows and data exchange.


5 Leadership Trends to Embrace Now to Prepare for 2022

Leaders of the future are paying attention. As we head into 2022, we must create cultures where employee well-being comes first. Change like this starts at the top, and leaders must set an example. Every person on a company’s executive team must be committed to workplace well-being, modeling a holistic lifestyle where top priorities are physical, emotional, mental and spiritual health. The days of work, work and more work are over. People are craving more balance and wellness in life, and leaders who ignore or resist addressing it will be left behind. Second, leaders must build a supportive environment that focuses on the whole person, not just the working portion. A supportive environment offers resources for depression and other mental health issues and incentives for exercise and healthy eating behaviors. Companies must offer EAP services that address mental health and financial, spiritual and social well-being. Creating a supportive environment requires an investment in training, including training on how to create psychological safety, so that employees feel safe talking about their well-being.


Optimize your system design using Architecture Framework Principles

When it comes to system design, simplicity is key. If your architecture is too complex to understand, your developers and operations teams can face complications during implementation or ongoing management. Wherever possible, we highly recommend using fully managed services to minimize the risk of managing and maintaining baseline systems, as well as the time and effort required by your teams. If you’re already running your workloads in production, testing managed service offerings can help simplify operational complexities. If you’re starting new, start simple, establish an MVP, and resist the urge to over-engineer. You can identify corner use cases, iterate, and improve your systems incrementally over time. Decoupling is a technique used to separate your applications and service components (such as a monolithic application stack) into smaller components that can operate independently. A decoupled architecture can therefore run its functions independently, irrespective of its various dependencies.
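
A minimal illustration of decoupling, using an in-process queue to stand in for a message broker (component names are invented):

```python
# A minimal sketch of decoupling: the producer and consumer share only a
# queue, so either side can be changed, scaled, or restarted independently.
import queue
import threading

tasks = queue.Queue()

def order_service():
    """Publishes work without knowing who will handle it."""
    for order_id in range(3):
        tasks.put({"order_id": order_id})
    tasks.put(None)  # sentinel: no more work

def invoice_service():
    """Consumes work without knowing who produced it."""
    while (task := tasks.get()) is not None:
        print("invoicing order", task["order_id"])

t = threading.Thread(target=invoice_service)
t.start()
order_service()
t.join()
```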


Engineering Manager: Do Not Be a Hero

Our day-to-day job is to support people and teams. Teams need to improve their skills, deliver higher-quality software, and much more. In an ideal world, our organization would have all the resources or capacity required to meet these team needs. But the real world is sometimes hard; we have to work toward these goals while being careful not to promise unrealistic ones. Generating an expectation and then failing to meet it has a very negative impact on the team. If this behavior occurs many times, the team will probably lose trust in us, and perhaps in the organization. If you are going to work on improving some of their needs, it's important to share that with the team and also to identify the priorities. Depending on the topic, the timing is also important. I believe in transparency, but transparency doesn't mean sharing every single thing that goes through your head. For example, if you are working to increase the team's salaries, it would be good to verify with the organization that there is enough budget before you share it with the team.


Bridging the AppSec and DevOps Disconnect

Culturally, some ingrained attitudes and behaviors challenge the success of any DevSecOps effort. Security teams have seen DevOps processes accelerate the speed at which software is delivered, but without security considerations, while DevOps teams have experienced security slowing down processes and giving inconsistent results and feedback on security issues. Each party has their own manager to please, their own set of metrics that they’re measured against, and a priority list as long as their arm already. Both teams follow different processes and, crucially, use different tools. DevOps can’t get around the security tool complexity and lack of integration with their existing toolset, and security teams have no control over the CI pipeline to best implement security assurance. One of the best ways to overcome this friction is through better technology, process, and culture that enable collaboration between teams. First, DevOps teams do care about security, but it might be lower on their priority list. Security teams must understand that DevOps teams care about code, quality and efficiency.


ARC4 Encryption Library

The ARC4 Cryptography Provider Class Library is a DLL file for .NET projects that includes an implementation of a well-known symmetric encryption algorithm that is not present in the System.Security.Cryptography namespace of the mscorlib library. The cryptographic algorithm, known as ARC4 (Alleged RC4), is a stream cipher that is widely used in various information security systems on computer networks (for example, the SSL and TLS protocols and the WEP and WPA wireless security algorithms). The original RC4 stream cipher was created by Ronald Rivest of RSA Security. For seven years, the cipher was a trade secret, and the exact description of the algorithm was provided only after the signing of a non-disclosure agreement, but in September 1994 its description was anonymously posted to the Cypherpunks mailing list. ... Although the cipher is no longer recommended, ARC4 remains popular due to its simple software implementation and high speed of operation. Other important advantages are the variable key length and the fact that the ciphertext is exactly the same size as the plaintext.
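
The library itself targets .NET, but the algorithm is compact enough to show in full. Here is a Python sketch of RC4 for illustration; note the output is exactly as long as the input, and RC4 is broken, so it should never protect real data:

```python
# For illustration only: a straightforward Python version of the RC4/ARC4
# algorithm (key-scheduling plus the pseudo-random generation loop).
# RC4 is considered broken - do not use it to protect real data.
def arc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = arc4(b"Key", b"Plaintext")
print(ct.hex())            # bbf316e8d940af0ad3 (a widely published test vector)
print(arc4(b"Key", ct))    # b'Plaintext' - the same function decrypts
```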


A Scalable Approach for Partially Local Federated Learning

Previous approaches for partially local federated learning used stateful algorithms, which require user devices to store a state across rounds of federated training. Specifically, these approaches required devices to store local parameters across rounds. However, these algorithms tend to degrade in large-scale federated learning settings. In these cases, the majority of users do not participate in training, and users who do participate likely only do so once, resulting in a state that is rarely available and can get stale across rounds. Also, all users who do not participate are left without trained local parameters, preventing practical applications. Federated Reconstruction is stateless and avoids the need for user devices to store local parameters by reconstructing them whenever needed. When a user participates in training, before updating any globally aggregated model parameters, they randomly initialize and train their local parameters using gradient descent on local data with global parameters frozen. They can then calculate updates to global parameters with local parameters frozen.
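
As a toy illustration of those two steps, the numpy sketch below uses a linear model with a single local bias parameter; the data, model, and learning rates are all invented:

```python
# A toy numpy sketch of the reconstruction step described above: each
# round, a participating user re-trains local parameters from scratch
# with global parameters frozen, then computes an update to the global
# parameters with local parameters frozen. Model: linear, squared loss.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=3)                    # global parameters (server side)

def user_round(g, X, y, lr=0.1, steps=50):
    # Step 1: reconstruct local parameters from scratch, global frozen.
    l = 0.0                               # local parameter (e.g., user bias)
    for _ in range(steps):
        err = X @ g + l - y
        l -= lr * err.mean()              # gradient step on the local part
    # Step 2: compute the update to global parameters, local frozen.
    err = X @ g + l - y
    return X.T @ err / len(y)             # only this leaves the device

updates = []
for _ in range(5):                        # five participating users
    X = rng.normal(size=(20, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.3   # 0.3 plays the "local" role
    updates.append(user_round(g, X, y))
g -= 0.5 * np.mean(updates, axis=0)       # server aggregates and applies
print("updated global parameters:", g)
```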


Why Log4j Mitigation Is Fraught With Challenges

One major challenge organizations face in defending against attacks targeting Log4j is figuring out their full exposure to the threat, according to security experts. The vulnerability can be present not just on an organization's Internet-facing assets, but on internal and back-end systems, network switches, SIEM and other logging systems, internally developed and third-party apps, in SaaS and cloud services, and environments they might not even know about. The interdependencies between different applications and components mean that even if a component does not directly have the vulnerability, it can still be affected by it. The way Java packaging works can often make it hard to identify affected applications, Noname Security says. As an example, a Java archive (JAR) file might contain all the dependencies — including the Log4j library — of a particular component. But that JAR file might contain another JAR file that, in turn, could contain yet another JAR file — essentially burying the vulnerability several layers deep, the security vendor said.
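
A hedged sketch of how one might probe for that nesting with Python's standard library (a heuristic, not a complete scanner):

```python
# A sketch of the nested-JAR problem described above: walk a JAR,
# recursing into any JARs it contains, and flag entries that look like
# the Log4j class abused by Log4Shell. Heuristic only.
import io
import sys
import zipfile

NEEDLE = "org/apache/logging/log4j/core/lookup/JndiLookup.class"

def scan_jar(fileobj, path=""):
    with zipfile.ZipFile(fileobj) as zf:
        for name in zf.namelist():
            if name.endswith(NEEDLE):
                print(f"possible Log4j hit: {path}{name}")
            elif name.endswith((".jar", ".war", ".ear")):
                # Recurse into the nested archive, held in memory.
                with zf.open(name) as nested:
                    scan_jar(io.BytesIO(nested.read()), f"{path}{name}!/")

if __name__ == "__main__":
    scan_jar(sys.argv[1])   # e.g. python scan.py app.jar
```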


Why employee burnout must be expected, accepted and supported this winter

All businesses must be mindful of the problem of employee burnout. According to one recent poll, 57% of employers claim that the issue is affecting turnover, retention and productivity. Another survey found that seven out of 10 workers would be willing to move jobs to try to reduce the likelihood of burnout. Across a number of leading economies, the summer of 2021 saw worker resignations reach record levels. If the burnout question is not addressed in the months ahead, we could be in for a further wave of resignations early in the new year. Organisations should consider that, according to Deloitte, for every £1 spent by employers on mental health interventions, they get back £5 in reduced absence, presenteeism, and staff turnover. Our advice to progressive organisations is to look around for local examples of best-practice wellbeing support, as well as burnout paid time off, and apply them across every market in which they employ people. Legal obligations must always be met, wherever you operate.


Digital IDs don’t have to impinge on civil liberties and privacy

When implemented correctly, decentralized digital IDs can make it harder to infringe upon civil liberties and privacy. That said, it’s essential that these IDs are not federated or corporatized but are, instead, self-sovereign identities, fully controlled by the end-user — made entirely possible by blockchain’s trustless verification. Decentralized digital IDs are supported by a wide range of emerging technologies and techniques, leading to the creation of a truly Self-Sovereign ID, or SSI — where users hold full control over their personal data. This includes zero-knowledge proofs, a technique that allows one party to prove a claim about data to another party without revealing the underlying information, which ensures that personal information never has to be revealed to or retained by third-party verifiers. Having self-sovereign identities linked to purchases and payment rails will facilitate trustless trade that can also seamlessly stay in line with regulatory expectations. Better yet, most of this upgrade would happen at the software level.



Quote for the day:

"No matter how much you change, you still have to pay the price for the things you've done." -- Doug MacRay

Daily Tech Digest - December 16, 2021

The New Face of Data Management

Despite the data explosion, IT organizations haven’t necessarily changed storage strategies. They keep buying expensive storage devices because unassailable performance is required for critical or “hot” data. The reality is that not all data is diamonds. Some of it is emeralds and some of it is glass. By treating all data the same way, companies are creating needless cost and complexity. ... Yet as hot data continues to grow, the backup process becomes sluggish. So, you purchase expensive, top-of-the-line backup solutions to make this faster, but you still need ever-more storage for all these copies of your data. The ratio of unique data (created and captured) to replicated data (copied and consumed) is roughly 1:9. By 2024, IDC expects this ratio to be 1:10. Most organizations are backing up and replicating data that is in fact rarely accessed and better suited to low-cost archives such as the cloud. Beyond backup and storage costs, organizations must also secure all of this data. A one-size-fits-all strategy means that all data is secured to the level of the most sensitive, critically important data.
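
A minimal sketch of how such tiering decisions could start, classifying files by last-access time (the threshold is arbitrary, and note that many filesystems update access times lazily):

```python
# A minimal sketch of the tiering idea above: classify files as "hot" or
# "cold" by last-access time, so rarely touched data can be moved to a
# cheap archive tier instead of premium storage.
import os
import time

COLD_AFTER = 180 * 24 * 3600   # ~6 months without access = cold

def classify(root):
    now = time.time()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            age = now - os.stat(path).st_atime
            yield ("cold" if age > COLD_AFTER else "hot"), path

for tier, path in classify("/data"):
    if tier == "cold":
        print("candidate for cloud archive:", path)
```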


Technology and the future of modern warfare

This digital revolution points to a new kind of hyper-modern warfare. Artificial intelligence is a good example of this. If AI can read more data in a minute than a human can read in a year, then its value to militaries is immeasurable. In a recent interview with The Daily Telegraph, the current Chief of the General Staff, General Sir Mark Carleton-Smith, acknowledged that “we are already seeing the implications of artificial intelligence, quantum computing and robotics, and how they might be applied on the battlefield”. Machine learning, for instance, has already been used to harvest key grains of intelligence from the chaff of trivial information that usually inundates analysts. All this is not to say, however, that there will be a complete obsolescence of traditional equipment and means. The British Army remains an industrial age organisation with an industrial skill set, but one which is confronted by innovation challenges. Conventional threats can still materialise at any time. The recent stationing of Russian troops along the Ukrainian border and within Crimea – in addition to the manoeuvring of its naval forces in the Sea of Azov – is a case in point.


Attacking Natural Language Processing Systems With Adversarial Examples

The attack can potentially be used to cripple machine learning translation systems by forcing them to either produce nonsense, or actually change the nature of the translation; to bottleneck training of NLP models; to misclassify toxic content; to poison search engine results by causing faulty indexing; to cause search engines to fail to identify malicious or negative content that is perfectly readable to a person; and even to cause Denial-of-Service (DoS) attacks on NLP frameworks. Though the authors have disclosed the paper’s proposed vulnerabilities to various unnamed parties whose products feature in the research, they consider that the NLP industry has been laggard in protecting itself against adversarial attacks. The paper states: ‘These attacks exploit language coding features, such as invisible characters and homoglyphs. Although they have been seen occasionally in the past in spam and phishing scams, the designers of the many NLP systems that are now being deployed at scale appear to have ignored them completely.’
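
To make the class of perturbation concrete, here is a small Python sketch that builds visually near-identical text from homoglyphs and invisible characters (shown for defensive understanding; the substitution map is a tiny sample):

```python
# A small sketch of the perturbations the paper describes: text built
# with homoglyphs and invisible characters that humans read normally
# but NLP pipelines tokenize very differently.
ZWSP = "\u200b"                      # zero-width space (invisible)
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Cyrillic look-alikes

def perturb(text: str) -> str:
    out = []
    for ch in text:
        out.append(HOMOGLYPHS.get(ch, ch))
        out.append(ZWSP)             # sprinkle invisible characters
    return "".join(out)

clean = "open the attachment"
attacked = perturb(clean)
print(attacked)                       # renders (nearly) identically on screen
print(clean == attacked)              # False
print(len(clean), len(attacked))      # very different codepoint lengths
```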


When done right, network segmentation brings rewards

Segmentation is an IT approach that separates critical areas of the network to control east-west traffic, prevent lateral movement, and ultimately reduce the attack surface. Traditionally, this is done via an architectural approach – relying on hardware, firewalls and manual work. This can often prove cumbersome and labor intensive, which is a contributing factor in 82% of respondents saying that network segmentation is a “huge task.” ... Modern segmentation uses a software-based approach that is simpler to use, faster to implement and is able to secure more critical assets. The research shows that organizations that leverage the latest approach to segmentation will realize essential security benefits, like identifying more ransomware attacks and reducing time to mitigate attacks. “The findings of the report demonstrate just how valuable a strong segmentation strategy can be for organizations looking to reduce their attack surface and stop damaging attacks like ransomware,” said Pavel Gurvich, SVP, Akamai Enterprise Security.


Neural networks can hide malware, and scientists are worried

As malware scanners can’t detect malicious payloads embedded in deep learning models, the only countermeasure against EvilModel is to destroy the malware. The payload only maintains its integrity if its bytes remain intact. Therefore, if the recipient of an EvilModel retrains the neural network without freezing the infected layer, its parameter values will change and the malware data will be destroyed. Even a single epoch of training is probably enough to destroy any malware embedded in the DL model. However, most developers use pretrained models as they are, unless they want to fine-tune them for another application. And some forms of fine-tuning freeze most existing layers in the network, which might include the infected layers. This means that alongside adversarial attacks, data poisoning, membership inference, and other known security issues, malware-infected neural networks are a real threat to the future of deep learning.
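
A toy numpy illustration of why even one gradient step perturbs the raw parameter bytes; the "model" here is a single random layer and the loss is arbitrary:

```python
# A toy illustration of the countermeasure described above: even a tiny
# amount of re-training changes the exact bytes of the parameters, so
# any payload hidden in them loses integrity.
import hashlib
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64)).astype(np.float32)   # pretend-infected layer

def byte_hash(arr):
    return hashlib.sha256(arr.tobytes()).hexdigest()[:16]

before = byte_hash(W)

# One gradient step on random data is enough to change parameter bytes.
x = rng.normal(size=(8, 64)).astype(np.float32)
grad = x.T @ (x @ W) / len(x)        # gradient of a simple quadratic loss
W -= 1e-4 * grad

print("before:", before)
print("after: ", byte_hash(W))       # different: embedded bytes are gone
```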


5 Key Skills Needed To Become a Great Data Scientist

Data scientists should develop the habit of critical thinking. It helps in better understanding the problem. Unless the problem is understood at the most granular level, the solution can't be good. Critical thinking helps in analyzing the different options and choosing the right one. While solving data science problems, decisions are not always simply good or bad; a lot of options lie in the grey area between. There are many decisions involved in a data science project: choosing the right set of attributes, the right methodology, the right algorithms, the right metrics to measure model performance, and so on. ... Coding skills are as important to a data scientist as eyes are to an artist. Anything a data scientist does requires coding skills: reading data from multiple sources, performing exploratory analysis on the data, building models, and evaluating them. ... Math is another important skill for data scientists. It is OK not to know some of the math concepts while you are learning data science, but it is not possible to excel as a data scientist without eventually understanding them.


DeepMind Now Wants To Study The Behaviour Of Electrons, Launches An AI Tool

Density functional theory (DFT) describes matter at the quantum level, but popular approximations suffer from systematic errors that arise from violating mathematical properties of the exact functional. DeepMind has overcome this fundamental limitation by training a neural network on molecular data and on fictitious systems with fractional charge and spin. The result was the DM21 (DeepMind 21) tool. It correctly describes typical examples of artificial charge delocalization and strong correlation, and performs better than traditional functionals on thorough benchmarks for main-group atoms and molecules. The company claims that DM21 accurately models complex systems such as hydrogen chains, charged DNA base pairs, and diradical transition states. DM21 is a neural network intended to achieve state-of-the-art accuracy on large parts of chemistry and to accelerate scientific progress; the code has been open-sourced.


Are venture capitalists misunderstood?

The arrival of growth investors put traditional VCs under further pressure. Whereas Rock and his successors regularly got involved long before there was a product to market, others eventually realized there were opportunities further down the line, and provided vast amounts of capital to established firms that they believed had the potential to become many times larger. These investors, who included Yuri Milner and Masayoshi Son, were irresistible to ambitious tech companies. Unlike VCs, which demanded equity in exchange for funding, Milner and Son did not even want to sit on the board. Mallaby argues that huge capital injections by growth investors (and the VCs that chose to compete with them) resulted in greater control for entrepreneurs, but also weaker corporate governance and ultimately over-reach and ill-discipline. “Precisely at the point when tech companies achieved escape velocity and founders were apt to feel too sure of themselves,” Mallaby writes, “the usual forms of private or public governance would thus be suspended.”


IT security: 4 issues to watch in 2022

If infosec had a greatest hits album, basic security hygiene would be track one. Year in, year out, the root cause of many security incidents can be traced back to the fundamentals. A wide range of threats, from ransomware to cloud account hijacking to data leakage, owe much of their efficacy to surprisingly simple missteps, from a misconfigured setting (or even a default setting left unchanged) to an over-privileged user to unpatched software. ... This raises the question: What are the basics? Things like password hygiene and system patching apply across the board, but you also need to identify and agree with colleagues on “the basics” required in your specific organization. That gives you a collective standard to work toward and measure against. Moreover, the word “basic” doesn’t do some of the fundamentals justice. “In my world, the basics are patch management, secure configuration, threat modeling, DAST and SAST scanning, internal and external vulnerability scanning, penetration testing, defense against phishing attacks, third-party vulnerability assessments, backup and disaster recovery, and bespoke security training,” Elston says.


Growing an Experiment-Driven Quality Culture in Software Development

When people share their thoughts and wishes, there might be different needs underlying them. For example, tech leadership can express their desire for global quantitative metrics, while they might actually need to figure out which impact they want to have in the first place and which information they need to have this impact. Remember the teams falling back to everyday business? The systemic part plays a huge role and needs to be considered in your experiments. For example, if you’re setting out to improve the quality culture of a team, think about what kind of behavior contributing to quality gets rewarded, and how. If a person does a great job, yet these contributions and the resulting impact are not valued, they probably won’t get promoted for it. The main challenges usually come back to people’s interactions as well as the systems in which we are interacting. This includes building trustful relationships and shaping a safe, welcoming space where people can bring their whole authentic selves and have a chance to thrive.



Quote for the day:
 
"The strong do what they have to do and the weak accept what they have to accept." -- Thucydides

Daily Tech Digest - December 15, 2021

Unstructured Data Will Be Key to Analytics in 2022

Many organizations today have a hybrid cloud environment in which the bulk of data is stored and backed up in private data centers across multiple vendor systems. As unstructured (file) data has grown exponentially, the cloud is being used as a secondary or tertiary storage tier. It can be difficult to see across the silos to manage costs, ensure performance and manage risk. As a result, IT leaders realize that extracting value from data across clouds and on-premises environments is a formidable challenge. Multicloud strategies work best when organizations use different clouds for different use cases and data sets. However, this brings about another issue: Moving data is very expensive when and if you need to later move data from one cloud to another. A newer concept is to pull compute toward data that lives in one place. That central place could be a colocation center with direct links to cloud providers. Multicloud will evolve with different strategies: sometimes compute comes to your data, sometimes the data resides in multiple clouds.


Developing Event-Driven Microservices

Microservices increasingly use event-driven architectures for communication, and relatedly, many data-driven systems also employ an event sourcing pattern of one form or another. This is when data changes are sent via events that describe the change and are received by interested services. Thus, the data is sourced from the events, and event sourcing in general moves the source of truth for data to the event broker. This fits nicely with the decoupling paradigm of microservices. It is very important to notice that there are actually two operations involved in event sourcing: the data change being made and the communication/event of that data change. There is, therefore, a transactional consideration, and any inconsistency or failure causing a lack of atomicity between these two operations must be accounted for. This is an area where TEQ (Transactional Event Queues) has an extremely significant and unique advantage: the messaging/eventing system is actually part of the database system itself and can therefore conduct both of these operations in the same local transaction, providing this atomicity guarantee.
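
The same atomicity idea can be sketched with the well-known transactional outbox pattern; the sqlite3 example below is purely illustrative and stands in for what TEQ does natively inside the database:

```python
# A sketch of the atomicity point above using the "transactional outbox"
# pattern: the data change and the event describing it are written in
# the same local database transaction, so neither exists without the other.
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT)")
db.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
db.commit()

def transfer(src, dst, amount):
    with db:  # one transaction: both writes commit, or neither does
        db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                   (amount, src))
        db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                   (amount, dst))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (json.dumps({"event": "transfer", "from": src,
                                "to": dst, "amount": amount}),))

transfer("alice", "bob", 25)
print(db.execute("SELECT * FROM accounts").fetchall())
print(db.execute("SELECT payload FROM outbox").fetchall())
# A separate relay would read the outbox and publish events to the broker.
```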
 

Quantum computing use cases are getting real—what you need to know

Most known use cases fit into four archetypes: quantum simulation, quantum linear algebra for AI and machine learning, quantum optimization and search, and quantum factorization. We describe these fully in the report, as well as outline questions leaders should consider as they evaluate potential use cases. ... Quantum computing has the potential to revolutionize the research and development of molecular structures in the biopharmaceuticals industry as well as provide value in production and further down the value chain. In R&D, for example, new drugs take an average of $2 billion and more than ten years to reach the market after discovery. Quantum computing could make R&D dramatically faster and more targeted and precise by making target identification, drug design, and toxicity testing less dependent on trial and error and therefore more efficient. A faster R&D timeline could get products to the right patients more quickly and more efficiently—in short, it would improve more patients’ quality of life. Production, logistics, and supply chain could also benefit from quantum computing.


How Extended Security Posture Management Optimizes Your Security Stack

XSPM helps the security team deal with constant content configuration churn and leverages telemetry to help identify the gaps in security by generating up-to-date emerging threat feeds and providing additional test cases emulating TTPs that attackers would use, saving DevSecOps the time needed to develop those test cases. When running XSPM validation modules, knowing that the tests are timely, current, and relevant enables reflecting on the efficacy of security controls and understanding where to make investments to ensure that configuration, hygiene and posture are maintained through the constant changes in the environment. By providing visibility and maximizing relevancy, XSPM helps verify that each dollar spent benefits risk reduction and tool efficacy through baselining and trending and by automatically generating reports containing detailed recommendations covering security hardening and tool stack optimization; it dramatically facilitates conversations with the board.


Edge computing keeps moving forward, but no standards yet

As powerful as this concept of seemingly unlimited computing resources may be, however, it does raise a significant, practical question. How can developers build applications for the edge when they don’t necessarily know what resources will be available at the various locations in which their code will run? Cloud computing enthusiasts may point out that a related version of this same dilemma faced cloud developers in the past, and they developed technologies for software abstraction that essentially relieved software engineers of this burden. However, most cloud computing environments had a much smaller range of potential computing resources. Edge computing environments, on the other hand, won’t only offer more choices, but also different options across related sites (such as all the towers in a cellular network). The end result will likely be one of the most heterogeneous targets for software applications that has ever existed. Companies like Intel are working to solve some of the heterogeneity issues with software frameworks. 


The Mad Scramble To Lead The Talent Marketplace Market

While this often starts as a career planning or job matching system, early on companies realize it’s a mentoring tool, a way to connect to development programs, a way to promote job-sharing and gig work, and a way for hiring managers to find great staff. In reality, this type of solution becomes “the system for internal mobility and development,” so companies like Allstate, NetApp, and Schneider see it as an entire system for employee growth. Other companies, like Unilever, see it as a way to promote flexible work. These companies use the Talent Marketplace to encourage agile, gig-style work and help people find projects or developmental assignments. Internal gig work and cross-functional projects are a massive trend (part of the movement toward Agile), and within a given function (IT, HR, Customer Service, Finance) this is incredibly powerful. And since the marketplace democratizes opportunities, companies like Seagate see this as a diversity platform as well.


Inside the blockchain developer’s mind: Proof-of-stake blockchain consensus

The real innovation in Bitcoin (BTC) was the creation of an elegant system for combining cryptography with economics to leverage electronic coins (now called “cryptocurrencies”) to use incentives to solve problems that algorithms alone cannot solve. People were forced to perform meaningless work to mine blocks, but the security stems not from the performance of work, but the knowledge that this work could not have been achieved without the sacrifice of capital. Were this not the case, then there would be no economic component to the system. The work is a verifiable proxy for sacrificed capital. Because the network has no means of “understanding” money that is external to it, a system needed to be implemented that converted the external incentive into something the network can understand — hashes. The more hashes an account creates, the more capital it must have sacrificed, and the more incentivized it is to produce blocks on the correct fork. Since these people have already spent their money to acquire hardware and run it to produce blocks, their incentivizing punishment is easy because they’ve already been punished!
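
A minimal proof-of-work sketch makes the "verifiable proxy" point concrete: finding a qualifying hash is expensive, while verifying it takes a single hash. The difficulty here is kept tiny so the demo runs quickly:

```python
# A minimal proof-of-work loop: a hash below the target is expensive to
# find but trivial to verify, so the hash count is a verifiable proxy
# for sacrificed capital (hardware plus energy).
import hashlib

def mine(header: bytes, difficulty_bits: int = 16):
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        h = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce, h.hex()
        nonce += 1

nonce, digest = mine(b"block-42")        # ~2^16 hashes on average
print(f"nonce={nonce} hash={digest}")
# Verification is one hash: anyone can confirm the work was done.
```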


Why Intuitive Troubleshooting Has Stopped Working for You

With complicated and complex, I’m using specific terminology from the Cynefin model. Cynefin (pronounced kuh-NEV-in) is a well-regarded system management framework that categorizes different types of systems in terms of how understandable they are. It also lays out how best to operate within those different categories — what works in one context won’t work as well in another — and it turns out that these operating models are extremely relevant to engineers operating today’s production software. Broadly, Cynefin describes four categories of system: obvious, complicated, complex, and chaotic. From the naming, you can probably guess that this categorization ranges from systems that are more predictable and understandable, to those that are less — where predictability is defined by how clear the relationship is between cause and effect. Obvious systems are the most predictable; the relationship between cause and effect is clear to anyone looking at the system. Complicated systems have a cause-and-effect relationship that is well understood, but only to those with system expertise. 


2022 will see a rise in application security orchestration and correlation (ASOC)

For organisations that build software, 2022 will be the year of invisible AppSec. When AppSec tools are run automatically, and when results are integrated with existing processes and issue trackers, developers can be fixing security weaknesses as part of their normal workflows. There is no reason for developers to go to separate systems to “do security,” and no reason they should be scrolling through thousand-page PDF reports from the security team, trying to figure out what needs to be done. When security testing is automated and integrated into a secure development process, it becomes a seamless part of application development. At the same time, organisations are coming to recognise that AppSec is a critical part of risk management, and that a properly implemented AppSec programme results in business benefits. Good AppSec equals fewer software vulnerabilities, which equals less risk of catastrophe or embarrassing publicity, but also results in fewer support cases, fewer emergency updates, higher productivity, and happier customers. But how can organisations turn this knowledge into power?


Why Sustainability Is the Next Priority for Enterprise Software

To meet market and consumer demands, every enterprise will need to evolve its sustainability programs to be just as accurate and rigorous as financial accounting. Just as the Sarbanes–Oxley Act of 2002 mandates practices in financial record keeping and reporting for corporations in the US, we can expect laws and consumer expectations around sustainability impacts to follow suit. In the same way that SaaS platforms, cloud computing, and digital transformation have changed how enterprises sell, hire, and invest, we’re on the cusp of similar changes within sustainability. For example, as recently as the mid-2000s, interviewing for a new corporate job meant printing out resumes, distributing paper benefits pamphlets, and signing forms that had been Xeroxed a half-dozen times. Today, numerous human resources software companies offer streamlined digital solutions for tracking candidates, onboarding new colleagues, and managing benefits. When large organizations are faced with a high volume of data in any area of their business, digitization is the inevitable solution.



Quote for the day:

"The level of morale is a good barometer of how each of your people is experiencing your leadership." -- Danny Cox

Daily Tech Digest - December 14, 2021

How eBPF will solve Service Mesh - Goodbye Sidecars

Why have we not created a service mesh in the kernel before? Some people have been semi-jokingly stating that kube-proxy is the original service mesh. There is some truth to that. Kube-proxy is a good example of how close the Linux kernel can get to implementing a service mesh while relying on traditional network-based functionality implemented with iptables. However, it is not enough; the L7 context is missing. Kube-proxy operates exclusively on the network packet level. L7 traffic management, tracing, authentication, and additional reliability guarantees are required for modern applications. Kube-proxy cannot provide this at the network level. eBPF changes this equation. It allows the functionality of the Linux kernel to be extended dynamically. We have been using eBPF for Cilium to build a highly efficient network, security, and observability datapath that embeds itself directly into the Linux kernel. Applying this same concept, we can solve service mesh requirements at the kernel level as well. In fact, Cilium already implements a variety of the required concepts such as identity-based security, L3-L7 observability & authorization, encryption, and load-balancing.
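
To make "dynamically extending the kernel" concrete, here is a classic minimal example using the bcc toolkit (an assumption chosen for illustration; this is not Cilium's code) that attaches a tiny eBPF program to a kernel function:

```python
# A minimal bcc sketch: attach an eBPF program to the kernel's
# tcp_connect function and print a line for each outbound connection
# attempt. Requires Linux, root privileges, and the bcc toolkit.
from bcc import BPF

prog = r"""
int kprobe__tcp_connect(struct pt_regs *ctx) {
    bpf_trace_printk("tcp_connect called\n");
    return 0;
}
"""

b = BPF(text=prog)   # compiled and loaded into the kernel at runtime
b.trace_print()      # stream the kernel-side trace output
```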


General and Scalable Parallelization for Neural Networks

Because different model architectures may be better suited to different parallelization strategies, GSPMD is designed to support a large variety of parallelism algorithms appropriate for different use cases. For example, with smaller models that fit within the memory of a single accelerator, data parallelism is preferred, in which devices train the same model using different input data. In contrast, models that are larger than a single accelerator’s memory capacity are better suited for a pipelining algorithm (like that employed by GPipe) that partitions the model into multiple, sequential stages, or operator-level parallelism (e.g., Mesh-TensorFlow), in which individual computation operators in the model are split into smaller, parallel operators. GSPMD supports all the above parallelization algorithms with a uniform abstraction and implementation. Moreover, GSPMD supports nested patterns of parallelism. For example, it can be used to partition models into individual pipeline stages, each of which can be further partitioned using operator-level parallelism.
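
A toy numpy sketch of operator-level partitioning, the kind of split GSPMD performs automatically inside the compiler (the explicit device loop is only for illustration):

```python
# A toy sketch of operator-level parallelism: one matmul is split
# column-wise across "devices" and the partial results are concatenated.
# GSPMD does this automatically inside XLA; the loop here is illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))          # activations
W = rng.normal(size=(16, 32))         # weights of one layer

num_devices = 4
shards = np.split(W, num_devices, axis=1)   # each "device" holds 1/4 of W

# Each device computes its slice of the output (in parallel on real hardware).
partial = [x @ w_shard for w_shard in shards]
y = np.concatenate(partial, axis=1)

print(np.allclose(y, x @ W))          # True: same result as the unsplit op
```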

What 2022 can hold for the developer experience

To improve the developer experience, and ultimately retain and attract talent, businesses should begin to make changes that reduce the strain placed on developers and help them achieve a healthier work-life balance. The introduction of fairly simple initiatives such as flexi-time and offering mental health days can help to reduce the risk of burnout and show developers that they are valued members of the business whose needs are being listened to. Additionally, organisations could look to provide extra resource and adopt the tools and technology that enable developers to automate parts of their workload. Solutions such as data platforms that make use of machine learning (ML) are a prime example of this. The use of this type of technology would enable developers to easily add automation and predictions to applications without needing to be experts in ML. Adopting technologies that embed ML capabilities can also help to simplify the process of building, testing, and deploying ML models and speed up the process of integrating them into production applications.

2022 Cybersecurity Risk Mitigation Roadmap For CISO & CIO As Business Drivers

As companies become aware of the need for data protection, their leaders are likely to increase the adoption of encryption, which will find its way into organizations’ basic cyber security architecture in 2022. This will have a ripple effect, and we can expect newer and updated applications providing data encryption solutions to be launched for businesses in the coming year. One of the most disruptive technologies in decades, blockchain technology will be at the heart of the shift from a centralized server-based internet system to transparent cryptographic networks. AI has matured from an experimental topic to mainstream technology. As a result, 2022 will see better accessibility of Artificial Intelligence (AI) based tools for creating robust cybersecurity protocols within an organization. In addition, we expect the new lineup of technology tools to be more cost-effective and yet more effective than ever before. Last but not least, 2022 will see a mix of remote work and on-site physical presence, thereby continuing the cybersecurity trends adopted during 2021.

Intel reports new computing breakthroughs as it pursues Moore’s Law

Intel made the announcement at the IEEE International Electron Devices Meeting (IEDM) 2021. In the press release, Intel talked at length about its three areas of pathfinding and the breakthroughs that prove it’s on track to continue following its roadmap through 2025 and beyond. The company is focusing on several areas of research and reports significant progress in essential scaling technologies that will help it deliver more transistors in its future products. Intel’s engineers have been working on solutions to increase the interconnect density in chip packaging by at least 10 times. Intel also mentioned that in July 2022, at the Intel Accelerated event, it plans to introduce Foveros Direct. This will provide an order of magnitude increase in the interconnect density for 3D stacking through enabling sub-10 micron bump pitches. The tech giant is calling for other manufacturers to work together in order to establish new industry standards and testing procedures, allowing for the creation of a new hybrid bonding chiplet ecosystem.


Designing High-Volume Systems Using Event-Driven Architectures

Thanks to the latest developments in Event-Driven Architecture (EDA) platforms such as Kafka and data management techniques such as Data Meshes and Data Fabrics, designing microservices-based applications is now much easier. However, to ensure these microservices-based applications perform at requisite levels, it is important to ensure critical Non-Functional Requirements (NFRs) are taken into consideration at design time itself. In a series of blog articles, my colleagues Tanmay Ambre and Harish Bharti and I are attempting to describe a cohesive approach to designing for NFRs. We take a use-case-based approach. In the first installment, we describe designing for “performance” as the first critical NFR. This article focuses on architectural and design decisions that are the basis of high-volume, low-latency processing. To make these decisions clear and easy to understand, we describe their application to a high-level use case of funds transfer. We have simplified the use case to focus mainly on performance.


How Hoppscotch is building an open source ‘API development ecosystem’

The Hoppscotch platform constitutes multiple integrated API development tools, aimed at engineers, software developers, quality assurance (QA) testers, and product managers. It includes a web client, which is pitched as an “online collaborative API playground,” enabling multiple developers or teams to build, test, and share APIs. A separate command line interface (CLI) tool, meanwhile, is designed for integrating automated test runners as part of CI/CD pipelines. And then there is the API documentation generator, which helps developers create, publish, and maintain all the necessary API documentation in real time. Hoppscotch for teams, which is currently in public beta, allows companies to create individual groups for specific use-cases. For example, it can create a team for its entire in-house workforce, where anyone can share APIs and related communications with anyone else. They can also create smaller groups for specific teams, such as QA testers, or for external vendors and partners where sensitive data needs to be kept separate from specific projects they are involved in.


Synthetic Quantum Systems Help Solve Complex Real-World Applications

Quantum Simulation is the most promising use of Pasqal’s QPU, in which the quantum processor is utilized to obtain knowledge about a quantum system of interest. It seems reasonable to employ a quantum system as a computational resource for quantum issues, as Richard Feynman pointed out in the 20th century. Neutral atom quantum processors will aid pure scientific discovery, and there are several sectors of application at the industrial level, such as the creation of novel materials for energy storage and transport, or chemical computations for drug development. “At Pasqal, we are not only scientists, we are not only academic, we industrialize our technology. By working with quantum technology, we want to build and sell a product which is reliable, and which helps to solve complex industrial problems in many contexts,” said Reymond. Among Pasqal’s customers is EDF, the French electricity utility. In the energy sector, Pasqal is working with EDF to develop innovative solutions for smart mobility.


Enterprise email encryption without friction? Yes, it’s possible

It is often (grudgingly) acknowledged in security circles that sometimes security must be partly sacrificed for better usability. But with Echoworx you can have the best of both worlds: a seamless, secure experience for organizations, their partners, vendors, and customers. “Our customers are mostly very large global enterprises in the finance, insurance, manufacturing, retail, and several other verticals,” says Derek Christiansen, Echoworx’s Engagement Manager. “When working with them, we must be sensitive to their needs. We do this by offering like-for-like encryption when we can, and by tailoring the integration of encryption to their existing flows.” ... The only thing that the customer needs to do is direct any email that needs encryption to the company’s infrastructure. “They can use their own data classification. That can be something as simple as an Office 365 rule. It’s also common to use a keyword (e.g., the word ‘secure’) in the email subject to route the message. We also have an optional plugin for Outlook that makes it really easy for senders,” Christiansen notes.


7 Critical Machine Intelligence Exams and The Hidden Link of MLOps with Product Management

The past 12 months have seen many machine learning operations tools gain prominence. Interestingly, one feature is notably absent or rarely mentioned in the discussion: quality assurance. Academia has already initiated research into machine learning system testing. In addition, several vendors provide data quality support or leverage data testing libraries or data quality frameworks. Automated deployment does exist as well in many tools. But what about canary deployments of models, and whatever happened to unit and integration testing in the machine learning universe? Many of these quality assurance proposals originate from an engineering mindset. However, more and more specialists without an engineering background perform a lot of model engineering. Further, recall that a separate person or team frequently runs the quality assurance activities, supposedly so that engineers can place their trust in others to catch mistakes. More cynical characters might insist that engineers need to be controlled and checked.
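
For what unit testing can mean in this universe, here is a hedged pytest-style sketch: fast, deterministic checks on model behavior that run in CI like any other test (the model and thresholds are illustrative):

```python
# A sketch of "unit tests" for a model: deterministic checks on behavior
# and output validity, runnable with pytest in CI like any other test.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train(X, y):
    return LogisticRegression(max_iter=1000).fit(X, y)

def test_model_beats_baseline():
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # a learnable toy target
    model = train(X, y)
    assert model.score(X, y) > 0.9            # must beat a trivial baseline

def test_probabilities_are_valid():
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] > 0).astype(int)
    proba = train(X, y).predict_proba(X)
    assert proba.shape == (len(X), 2)         # one row per sample, two classes
    assert np.allclose(proba.sum(axis=1), 1.0)
```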



Quote for the day:

"Leaders dig into their business to learn painful realities rather than peaceful illusion." -- Orrin Woodward

Daily Tech Digest - December 13, 2021

Rebranding Data Governance

To successfully implement managed self-service business intelligence at any non-trivial scale, you need data governance. Data governance is also essential to building and nurturing a successful data culture. Despite this, and despite the obvious value it can provide, data governance has a bad reputation. Many people – likely including the leaders you need as allies if you’re working to build a data culture in your organization – have had negative experiences with data governance in the past, and now react negatively whenever the topic is raised. ... Data governance, and building a data culture in general, is as much about people as it is about processes and technology, which means effective communication is key. Effective communication requires a shared vocabulary and a shared understanding of the meaning of key words. It may be time to think about rebranding. Not unlike how a corporation with a reputation for all the wrong things might change its name in an effort to leave the negative connotations behind without really changing its ways, maybe we need to rebrand data governance… at least some of the time.


Crypto Banks: The intersection of traditional finance and DeFi?

It is important to note that crypto banks are very much centralized, like traditional banks, which flies in the face of the true crypto ethos, predicated on decentralisation and a reduction in intermediaries. This means that the problems of a banking collapse, hacks, and attacks can still impact a person’s money, and users are not achieving the financial independence that crypto offers. However, it also means that these crypto banks have regulatory standing and the ability to be compliant. They are legal entities that can be subject to laws and legislation; legislation that is constantly evolving to meet the changing times. At face value, it appears that these digital asset institutions, or “crypto banks,” are appeasing those new to the space who are willing to join the early majority and move to a more crypto-focused financial way of life. Good UX, simple integrations with existing financial platforms, more financial options in a crypto context, and reliable regulation mean a massive open door for users looking for this kind of experience.


Breakthrough Proof Clears Path for Quantum AI – Overcoming Threat of “Barren Plateaus”

“All hope of quantum speedup or advantage is lost if you have a barren plateau,” Cerezo said. The crux of the problem is a “vanishing gradient” in the optimization landscape. The landscape is composed of hills and valleys, and the goal is to train the model’s parameters to find the solution by exploring the geography of the landscape. The solution usually lies at the bottom of the lowest valley, so to speak. But in a flat landscape one cannot train the parameters, because it is difficult to determine which direction to take. That problem becomes particularly relevant as the number of data features increases; in fact, the landscape becomes exponentially flat with the feature size. Hence, in the presence of a barren plateau, the quantum neural network cannot be scaled up. The Los Alamos team developed a novel graphical approach for analyzing the scaling within a quantum neural network and proving its trainability. For more than 40 years, physicists have thought quantum computers would prove useful in simulating and understanding quantum systems of particles, which choke classical computers.
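
To make the flatness precise: the standard formalization of a barren plateau in the literature (McClean et al., 2018) says that for randomly initialized parameterized circuits, the cost gradient concentrates exponentially around zero in the number of qubits n, which grows with the feature size. This is the general statement of the phenomenon, not the Los Alamos team’s specific scaling result:

```latex
% General barren plateau statement (cf. McClean et al., 2018);
% C is the cost function, theta_k a trainable parameter, n the qubit count.
\mathbb{E}\!\left[\partial_{\theta_k} C(\boldsymbol{\theta})\right] = 0,
\qquad
\operatorname{Var}\!\left[\partial_{\theta_k} C(\boldsymbol{\theta})\right] \in O\!\left(2^{-n}\right)
```

Because the gradient is exponentially concentrated at zero, the number of measurement shots needed to distinguish a descent direction from statistical noise grows exponentially with n, which is why a network stuck on a barren plateau cannot be scaled up.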


Great Engineering Teams Focus On Milestones Instead of Projects

The problem is that we aspire to unnecessary precision. We don’t need exact estimates to ensure engineering focuses on the most important work. Producing exact estimates is costly and wasteful, and it doesn’t even produce the outcome you’re looking for. You don’t need to pretend the feature will be done on June 20; that date is almost certainly wrong anyway. Milestones reduce the complexity of putting together high-level estimates. They give a shorthand that is good enough for most decision-making, without the overhead of rigorous estimation. Occasionally, I see people succeed at rigorous estimation. But it’s rarely systemic; it’s usually one individual who is good at it, and it relies on them. If they go on vacation for a week, nobody can feed their model, and it collapses. While this is a great skill, to me it is the exception that proves the rule. Think of it this way: if one in twenty people can estimate well in a high-complexity situation, how many could succeed in a less complex one?


Better management through anthropology

“Almost all business leaders and policymakers could benefit by asking the basic question that dogs anthropology: if a Martian was to land here suddenly and look around, what might they see?” writes Tett. In 1997, when GM was struggling to get teams of engineers from its small car group, its Saturn subsidiary, and Germany’s Opel to create a new vehicle together, the company called in anthropologist Elizabeth Briody to figure out what was wrong. Briody traced the dysfunction to differing cultural assumptions about meetings: the Opel team expected agenda-driven, decision-making sessions; the small car group expected working sessions in which ideas would be shared and discussed; and the Saturn teams expected consensus-building sessions. As a result, the meetings were chaotic and frustrating for everyone. Unfortunately, Briody’s insights came too late, and GM’s senior product development leaders shut down the project. Lastly and most intriguingly, anthropology can help surface what Tett calls social silence.


“Curiosity is a skill you can develop”: Microsoft UK’s National Technology Officer on driving innovation forwards

“The pandemic experience has been a great example of how those values truly help our decision-making and our approach to markets. We faced unprecedented demand for our technologies because of the shift to home working. But we only have finite resources, so we worked out where to focus our attention and apply the most value. It was a strong statement, which came directly down from Satya, that we needed to ensure we are preserving life and that our technology was going to play a critical role.” Of the numerous case studies showing how Microsoft has used its pioneering technology to help with the response to the pandemic, Robinson offers two examples. First, Imperial College Healthcare NHS Trust, one of the largest in England, used mixed-reality HoloLens headsets on its COVID-19 wards to enable an entire team of specialists to join doctors virtually on the frontlines of the pandemic, keeping them safe as they helped patients with the virus. Additionally, Microsoft is driving artificial intelligence innovation at Alder Hey Hospital, a children’s hospital in Liverpool, allowing doctors to make life-changing decisions remotely.


Overcoming Challenges to Automating DevSecOps

When you first move to security automation, you’ll find some security tasks that can easily be automated and others that are harder to fit in. DevSecOps teams will need many tools to cover all their bases, but no one wants teams checking lots of different tool outputs. Consolidating security tools and their results in a central platform makes it easier to see what is going on and where risk lies, giving everyone a single pane of glass. Everyone generally agrees that DevOps teams must adopt cybersecurity best practices, but testing takes time, and development teams don’t have time to learn, use, and check lots of security tools on top of their existing workload. Automated solutions get to work without any real need for maintenance or management: they trigger the right tools at the right time, depending on the outcomes found, with no manual input needed. This gives you and your team valuable time back to focus on other important matters. The goal of CI/CD pipelines is fast delivery of build and release steps, typically through automation.
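
As a rough illustration of the “single pane of glass” idea, here is a hypothetical sketch that normalizes JSON findings from several scanners into one schema and fails the pipeline above a severity threshold; the report filenames, record fields, and threshold are assumptions, not any specific product’s format.

```python
# A minimal sketch of consolidating findings from several scanners into
# one normalized view and gating the pipeline on severity. File names
# and JSON shapes are hypothetical, for illustration only.
import json
import sys
from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    tool: str
    severity: str   # "low" | "medium" | "high" | "critical"
    title: str

SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def load_findings(path: str, tool: str) -> List[Finding]:
    # Assume each tool was run with a JSON output option and each record
    # exposes some severity and title field; map them into one schema.
    with open(path) as fh:
        raw = json.load(fh)
    return [Finding(tool, r["severity"].lower(), r["title"]) for r in raw]

def gate(findings: List[Finding], fail_at: str = "high") -> int:
    threshold = SEVERITY_RANK[fail_at]
    blocking = [f for f in findings if SEVERITY_RANK[f.severity] >= threshold]
    for f in sorted(blocking, key=lambda f: -SEVERITY_RANK[f.severity]):
        print(f"[{f.severity.upper():8}] {f.tool}: {f.title}")
    return 1 if blocking else 0  # non-zero exit fails the pipeline stage

if __name__ == "__main__":
    findings: List[Finding] = []
    for path, tool in [("sast.json", "sast"), ("deps.json", "sca")]:
        findings += load_findings(path, tool)
    sys.exit(gate(findings))
```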


Unused identities: A growing security threat

The first step towards taking control of your identities and your assets’ authorizations is to know what you have. This starts with scanning across all your XaaS environments (SaaS, IaaS, and PaaS) and taking an inventory of which identities have authorization to which assets. This involves ingesting the data from these different environments, normalizing it into a workable model, and then correlating it with the identities from your identity provider (IDP), such as Okta, Ping, Azure AD, or Google. The goal is to understand the relationship between identities and assets, assessing a wide range of factors, including how they are used and whether they are right-sized for the policies and needs of the organization. Are there identities with permissions to assets that have not been used in at least 60 days? This might be a good time to revoke those authorizations. But that’s just a surface-level case. Once you start analyzing your entitlements at a deeper level, you’ll find that more permissions have been granted to your identities than you’re likely to want to admit.
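
As a sketch of the 60-day check described above: once entitlements from the various environments have been normalized into one model, flagging stale ones is straightforward. The record layout and field names below are hypothetical, for illustration only.

```python
# A minimal sketch of flagging stale entitlements once authorizations
# from SaaS/IaaS/PaaS environments have been normalized into one model.
# The record layout and field names are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional

@dataclass
class Entitlement:
    identity: str                  # e.g. an IDP user or service account
    asset: str                     # e.g. a bucket, repo, or database
    permission: str
    last_used: Optional[datetime]  # None means never used

def stale_entitlements(entitlements: List[Entitlement],
                       max_idle_days: int = 60,
                       now: Optional[datetime] = None) -> List[Entitlement]:
    """Return entitlements unused for at least max_idle_days."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_idle_days)
    return [e for e in entitlements
            if e.last_used is None or e.last_used < cutoff]

inventory = [
    Entitlement("alice", "prod-db", "admin", datetime(2021, 8, 1)),
    Entitlement("ci-bot", "artifacts", "write", datetime(2021, 12, 10)),
]
for e in stale_entitlements(inventory, now=datetime(2021, 12, 13)):
    print(f"revoke candidate: {e.identity} -> {e.permission} on {e.asset}")
```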


Digital transformation: 4 CIO tips for 2022

To be more agile, innovation and development teams must be closer to the business and have the entire team involved in new product development, testing, and validation. This ensures that their priorities are met from the beginning and no time or resources are wasted. Performance and regression testing and security scanning must be baked in throughout, in real time, to shorten the development cycle for maximum efficiency. In our organization, that meant moving to weekly sprints and release cycles for our internal infrastructure. Each week, our business reps prioritized what was most important and participated in the testing and validation process. As we worked to roll out our new partner portal, our marketing and support teams were involved from the very beginning, advising the development team and testing features and functionality. By bringing IT and business operations closer together, we’ve been able to balance innovation and operations without having to sacrifice either, even when talent or resources have been limited.


Report Dissects Conti Ransomware Attack on Ireland's HSE

The HSE attack began on March 18 from a malware infection on an HSE workstation - dubbed "Patient Zero Workstation" - as the result of a user clicking and opening a malicious Microsoft Excel file that was attached to a phishing email sent to the user on March 16. "After gaining unauthorized access to the HSE’s IT environment on March 18, the attacker continued to operate in the environment over an eight week period until the detonation of the Conti ransomware on May 14," the report says. "This included compromising and abusing a significant number of accounts with high levels of privileges, compromising a significant number of servers, exfiltrating data and moving laterally to statutory and voluntary hospitals." ... "There were several detections of the attacker’s activity prior to May 14, but these did not result in a cybersecurity incident and investigation initiated by the HSE and as a result, opportunities to prevent the successful detonation of the ransomware were missed."



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson