Daily Tech Digest - April 28, 2023

CISOs Rethink Data Security With Info-Centric Framework

"Data is on a logarithmic curve; for every amount of data that I have next year, it's probably 2.5 times more than the amount of data I had this year," he says. "We're data hoarders, for lack of a better term; no one wants to get rid of people's information who have signed up to websites and forums and everything else, so we have this enormous data sprawl. That, in turn, leaves behind security blind spots." Further adding to the challenge is the fact that some data is of course more sensitive than other information, and some information doesn't need protecting at all, Rushing points out. And there's dynamism in terms of defining appropriate security levels as data ages. He uses a product launch to illustrate his point. "With a product release, we start off with a situation where no one knows about it, everything's embargoed, and you're protecting this important intellectual property," he explains. "And the next thing you know, it's released for public consumption. And it's suddenly not top secret anymore, in fact, you want the whole world to know about it."


How ‘Data Clean Rooms’ Evolved From Marketing Software To Critical Infrastructure

Data clean rooms as we know them today represent the first phase in leveraging “clean data.” User privacy is protected, while advertisers retain access to the necessary information. This model is now being extended and expanded upon in the enterprise. It is no longer about just protecting personal data. Companies need to act fast on data-derived insights, and therefore cannot compromise efficiency and collaborative abilities. They need truly comprehensive and dynamic data-sharing capabilities that can be quickly configured with little code and setup. ... As one of the key reasons for data clean rooms is the expanding IoT, businesses increasingly find themselves needing to demonstrate the provenance and veracity of their IoT data for business transactions or regulatory requirements. A data clean room must provide a single pane of glass for the trust and protection of IoT devices, the data they transmit and their data operations. This will require the need to authenticate IoT devices, protect the data as it travels from the device to the cloud and back to the device, and provide additional data points for audits.


ACID Transactions Change the Game for Cassandra Developers

For years, Apache Cassandra has been solving big data challenges such as horizontal scaling and geolocation for some of the most demanding use cases. But one area, distributed transactions, has proven particularly challenging for a variety of reasons. It’s an issue that the Cassandra community has been hard at work to solve, and the solution is finally here. With the release of Apache Cassandra version 5.0, which is expected later in 2023, Cassandra will offer ACID transactions. ACID transactions will be a big help for developers, who have been calling for more SQL-like functionality in Cassandra. This means that developers can avoid a bunch of complex code that they used for applying changes to multiple rows in the past. ... The advantage of ACID transactions is that multiple operations can be grouped together and essentially treated as a single operation. For instance, if you’re updating several points of data that depend on a specific event or action, you don’t want to risk some of those points being updated while others aren’t. ACID transactions enable you to do that.


Corporate boards pressure CISOs to step up risk mitigation efforts

The report also found that general misunderstandings in common cyber risk terminology could be a deterrent in developing effective strategies and communicating risk to company leadership. Cyberattacks have been increasing for several years now and resulting data breaches cost businesses an average of $4.35 million in 2022, according to an IBM report. Given the financial and reputational consequences of cyberattacks, corporate board rooms are putting pressure on CISOs to identify and mitigate cyber/IT risk. Yet, despite the new emphasis on risk management, business leaders still don’t have a firm grasp on how cyber risk can impact different business initiatives—or that it could be used as a strategic asset and core business differentiator. To better understand the current cybersecurity and IT risk challenges companies are facing, as well as steps executives are taking to combat risk, RiskOptics fielded a survey of 261 U.S. InfoSec and GRC leaders. Respondents varied in job level from manager to the C-Suite and worked across various industries.


What is the Spotify model in agile?

The Spotify model is just the autonomous scaling of agile, as hinted at in the paper’s name. It’s based on agile principles and unique features specific to Spotify’s organizational structure. This framework became wildly popular and was dubbed the “Spotify model,” with Henrik Kniberg credited as the inventor. ... Every other company wanted to adopt this framework for themselves. Spotify enjoyed a reputation for being innovative, and people assumed that if this framework worked so well for Spotify, it must also work great for them. Companies began to feel as if this framework was perfect, but nothing is perfect Spotify has changed its practices and ways of working over time — adapting its strategies and methodologies to changes in the market, user preferences, and more. The Spotify model itself was built with the company’s culture, values, and organizational structure in mind, with the ultimate goal of promoting cross-collaboration and innovation. As a result, it’s not a one-size-fits-all — the Spotify model was built around a foundation the company had already laid out.


Embracing zero-trust: a look at the NSA’s recommended IAM best practices for administrators

Knowing that credentials are a key target for malicious actors, utilizing techniques such as identity federation and single sign-on can mitigate the potential for identity sprawl, local accounts, and a lack of identity governance. This may involve extending SSO across internal systems and also externally to other systems and business partners. SSO also brings the benefit of reducing the cognitive load and burden on users by allowing them to use a single set of credentials across systems in the enterprise, rather than needing to create and remember disparate credentials. Failing to implement identity federation and SSO inevitably leads to credential sprawl with disparate local credentials that generally aren’t maintained or governed and represent ripe targets for bad actors. SSO is generally facilitated by protocols such as SAML or Open ID Connect (OIDC). These protocols help exchange authentication and autorization data between entities such as Identity Providers (IdP)’s and service providers. 


10 habits of people who are always learning new things

They’re the ones who are infinitely curious about the world around them – those who take things apart to find out how they work, or go on nature walks and prod everything with a stick, or do science experiments… Reading helps them stay informed about the world, learn from others’ experiences, and develop new perspectives. Whether it’s books, articles, blogs, or even social media, they make a habit of consuming content that feeds their mind and broadens their horizons. And they don’t just read books because they have to. No, they WANT to; they read just for pleasure and personal growth, across various genres and subjects. That’s why they have a well-rounded knowledge base and are very open-minded about other people’s perspectives! Just because you’ve gotten goal-setting down to an art doesn’t mean it’s all smooth sailing. Of course not. You’ll definitely be making mistakes. But mistakes don’t have to get you down. In fact, mistakes are perfect vehicles for learning, but only if you have a growth mindset.


How Security Leaders Should Approach a Challenging Budgeting Environment

Organizations need to understand that cybercriminals don’t care about the scope of the security controls. CIOs and CISOs cannot continue to operate in the dark without confidence about how well processes work; they need an understanding of what needs to be protected beyond the classical understanding of cybersecurity coverage. That means addressing cybersecurity from a business perspective. CISOs and CIOs can gain complete insight into the security posture and performance by converging tools like SIEM, SOAR, UEBA and business-critical security solutions, expanding the visibility beyond the IT infrastructure and into business-critical applications that contain invaluable information. A converged security solution can turn unqualified alerts into real, actionable intelligence by adding contextual information and automating responses. Another important thing to be mindful of is the pricing model for security solutions. Many are based on data volumes, which means the pricing is continuously increasing and unpredictable.


Is Web3 tech in search of a business model?

Changing their business models. I have worked with various startups in the distributed ledger technology space and what they were doing may have been revolutionising industries but all they were doing was replacing one technology with a new one without replacing the business model. The biggest challenge is how to use this new democratic power with its lower transaction costs and improved security to create new business models. The question is how to come up with such a commercial model? How can you monetise the system? In every other system, you monetise the system by creating a middleman. But the ultimate benefit of DLT is to do away with all middlemen. My fear is that all businesses will do if find a new way of creating new intermediaries using this technology. By definition, a business makes profits by adding value and that value is created through economies of scale and adding some form of brokerage in the process. The irony is that the whole purpose of this technology is to do away with the concept of the transaction, which is what capitalism is based on!


Defending Against the Evolving Infostealer Malware Threat

Flatley says employee education is very important in this space, as helping employees understand why following security policies is important will encourage compliance. “People are more apt to follow the rules if they understand the consequences. However, no amount of training will reduce this risk enough,” he says. That means security policies need to be enforced by technical means that are designed to prevent accidental or intentional non-compliance. “Even more important, we must understand that no amount of training or technical defenses will entirely stop this threat,” he says. Organizations must not only instrument a network to detect malicious activity and craft formal plans for remediating stolen identity information well in advance, but they must also practice them well so an attack can be acted upon quickly. Hazam points to education training tools such as simulated phishing attacks, which can help employees recognize and respond to real phishing emails, and gamified training programs, which can make the training more engaging and enjoyable for employees.



Quote for the day:

"Coaching isn't an addition to a leader's job, it's an integral part of it." -- George S. Odiorne

Daily Tech Digest - April 27, 2023

How can we build engagement in our organization’s data governance efforts?

The first thing to recognize is that establishing a data governance initiative is a change program—not a one-off project. Successful data governance programs change behaviors around how data is used, and changing behaviors takes time. Top-down impositions of data governance based on theory and text-heavy policies often fail to build engagement because they are detached from organizational context. The most successful transformations we have seen are the result of an organic development of data governance from organization and culture. This requires intentional communication, iteration, and open feedback based on listening to stakeholders and users. Communicate the benefits of data governance by emphasizing the positive impact the program can have on your organization’s ability to achieve its strategic objectives, such as improving decision-making, enhancing data quality, and ensuring regulatory compliance. Organizations must be willing to accept that there will be challenges and pushback to the program. 


The State of Organizations 2023: Ten shifts transforming organizations

‘True hybrid’: The new balance of in-person and remote work. Since the COVID-19 pandemic, about 90 percent of organizations have embraced a range of hybrid work models that allow employees to work from off-site locations for some or much of the time. It’s important that organizations provide structure and support around the activities best done in person or remotely. ... Closing the capability chasm. Companies often announce technological or digital elements in their strategies without having the right capabilities to integrate them. To achieve a competitive advantage, organizations need to build institutional capabilities—an integrated set of people, processes, and technology that enables them to do something consistently better than competitors do. ... Walking the talent tightrope. Business leaders have long walked a talent tightrope—carefully balancing budgets while retaining key people. In today’s uncertain economic climate, they need to focus more on matching top talent to the highest-value roles. McKinsey research shows that, in many organizations, between 20 and 30 percent of critical roles aren’t filled by the most appropriate people.


How prompt injection can hijack autonomous AI agents like Auto-GPT

A new security vulnerability could allow malicious actors to hijack large language models (LLMs) and autonomous AI agents. In a disturbing demonstration last week, Simon Willison, creator of the open-source tool datasette, detailed in a blog post how attackers could link GPT-4 and other LLMs to agents like Auto-GPT to conduct automated prompt injection attacks. Willison’s analysis comes just weeks after the launch and quick rise of open-source autonomous AI agents including Auto-GPT, BabyAGI and AgentGPT, and as the security community is beginning to come to terms with the risks presented by these rapidly emerging solutions. In his blog post, not only did Willison demonstrate a prompt injection “guaranteed to work 100% of the time,” but more significantly, he highlighted how autonomous agents that integrate with these models, such as Auto-GPT, could be manipulated to trigger additional malicious actions via API requests, searches and generated code executions. Prompt injection attacks exploit the fact that many AI applications rely on hard-coded prompts to instruct LLMs such as GPT-4 to perform certain tasks. 


Agility and Architecture

When making architectural decisions, teams balance two different constraints:If the work they do is based on assumptions that later turn out to be wrong, they will have more work to do: the work needed to undo the prior work, and the new work related to the new decision. They need to build things and deliver them to customers in order to test their assumptions, not just about the architecture, but also about the problems that customers experience and the suitability of different solutions to solve those problems. No matter what, teams will have to do some rework. Minimizing rework while maximizing feedback is the central concern of the agile team. The challenge they face in each release is that they need to run experiments and validate both their understanding of what customers need but also the viability of their evolving answer to those needs. If they spend too much time focused just on the customer needs, they may find their solution is not sustainable, but if they spend too much time assessing the sustainability of the solution they may lose customers who lose patience waiting for their needs to be met.


Beginning of the End of OpenAI

Maybe OpenAI was not anticipating its success with ChatGPT technology back then. Now, the explanation for the trademark application can be just so that no one clones the company makes the most sense currently. Or maybe not. Maybe the Sam Altman led company has bigger plans. The company had already registered with AI.com to redirect it to ChatGPT — a pretty strong statement. Well, now that the AI arms race is in full glory, there might be something that Google can do as well to catch up. Up until now, Google made strides by improving its technology, but it might have another trick up its sleeve. If OpenAI files for a trademark on ‘GPT’, which is more than just a product name, but a name of technology, and the USPTO accepts it or even considers it, the application will be moved for an ‘opposition period’. ... OpenAI may be getting a bit too possessive about their products. GPT stands for Generative Pre-trained Transformers and interestingly, ‘Transformer’ was introduced by Google in 2017 as a neural network architecture, for which the company has also filed a patent.


Macro trends in the tech industry

Managing tech debt and maintaining system health are essential for the long-term success of any product or system. Tech debt has beenin the news cycle over the last six months, but it’s certainly not a new concept. We’re happy that it’s being discussed, but ultimately managing tech debt is not rocket science: good product managers and tech leads should already be considering cross-functional requirements, including tech debt management. Fitness functions can identify and measure important quality characteristics, and we can describe tech debt in terms of how it may improve those characteristics. ... As low-code and no-code platforms continue to evolve and mature — and especially because these tools are likely to be augmented with AI enabling them to produce applications faster or for less expert users — we decided to reiterate our advice around bounded low-code platforms. We remain skeptical because the vendor claims around these tools are, basically, dangerously optimistic. There are no silver bullets and a low-code platform should always be evaluated in context as a potential solution, not used as a default option.


7 venial sins of IT management

First of all, comparing the two, being a business person is easier. Second of all, unless you think the company’s CFO should be a business person, not a finance person, and that the chief marketing officer should be a business person and not a marketeer, the whole thing just isn’t worth your time and attention. But since I have your attention anyway, here’s the bad news about the good news: CIOs who try to be business people instead of technology people are like the high school outcasts who are desperately trying to join the Cool Kids Club. They’ll still be excluded, only now they’ve added being pathetic to their coolness deficit. ... Product management is the business discipline of managing the evolution of one of a company’s products or product lines to maintain and enhance its marketplace appeal. IT product management comes out of the agile world, and has at best a loose connection to business product management. Because while there is some limited point in enhancing the appeal of some chunk of a business’s technology or applications portfolio, that isn’t what IT product management is about.


UK government introduces Digital Markets Bill to Parliament

CMA chief executive Sarah Cardell welcomed the Bill and the powers it granted to the competition regulator. “This has the potential to be a watershed moment in the way we protect consumers in the UK and the way we ensure digital markets work for the UK economy, supporting economic growth, investment and innovation,” she said. “Digital markets offer huge benefits, but only if competition enables businesses of all shapes and sizes the opportunity to succeed,” said Cardell. “This Bill is a legal framework fit for the digital age. It will establish a tailored, evidenced-based and proportionate approach to regulating the largest and most powerful digital firms to ensure effective competition that benefits everyone.” She added that the CMA will support the Bill through the legislative process, and that it stands ready to use these powers once it has been approved by Parliament. Baroness Stowell, chair of the House of Lords Communications and Digital Committee, which called for the creation of a new digital regulator like the DMU in March 2019, said the Bill is about ensuring a level playing field in digital markets.


Spring Cleaning the Tech Stack

As a company matures, part of the natural process is accumulating a plethora of applications along the way, which then requires IT to routinely evaluate to eliminate waste. Richard Capatosto, IT manager at Backblaze, explains IT spends a lot of time and energy tracking down, identifying, and operationalizing these “rogue” applications. “They are typically very inefficient to support for several reasons,” he says. “First, they are sometimes one-off apps which were purchased outside of our enterprise applications stack and may not have enterprise-level security.” Usually in those instances, they’ve been purchased outside of normal processes (e.g., on credit cards), which creates further downline work. “Second, these applications often do not support enterprise SSO and provisioning, which is key to maintaining efficient and secure IT operations,” he says. Eliminating or upgrading these applications reduces unnecessary spend, conforms to security best practices, and lets the IT team provide guidance about better tech-based workflows based on existing and potential applications.


Generative AI and security: Balancing performance and risk

From a security perspective, it’s both appealing and daunting to imagine an ultra-smart, cloud-hosted, security-specific AI beyond anything available today. In particular, the sheer speed offered by an AI-powered response to security events is appealing. And the potential for catastrophic mistakes and their business consequences is daunting. As an industry observer, I often see this stark dichotomy reflected in marketing, like that of the recently-launched Microsoft Security Copilot. One notices Microsoft’s velocity-driven pitch – “triage signals at machine speed” and “respond to incidents in minutes, instead of hours or days.” But one also notices the cautious conservatism of the product name: it’s not a pilot, it’s merely a copilot. Microsoft doesn’t want people getting the idea that this tech can, all by itself, handle the complex job of creating and executing a company’s cybersecurity strategy. That, it seems to me, is the approach we should all be taking to these tools, while carefully considering what type of data can and should be fed to these algorithms. 



Quote for the day:

"Time is neutral and does not change things. With courage and initiative, leaders change things." -- Jesse Jackson

Daily Tech Digest - April 26, 2023

How to vet your vendors: Ensuring data privacy and security compliance

Equally as important is ensuring that the vendors actually adhere to regulatory requirements and checking what data privacy infrastructure and security measures they have in place. Do they employ permission and user access controls, employee security awareness, patch management, system configuration management and periodic penetration testing? How do they handle data subject concerns? Do they notify new data subjects? Is there an opt-in/opt-out feature? Are databases accurate, and are they updated regularly based on customer feedback and privacy requests? ... Finally, ask about the organization’s overall mindset and handling of data security and privacy. Have they made it a priority across their organization? Do ALL employees receive data and privacy-related training, even if the entire team doesn’t work on those issues directly? A third-party partner that goes above and beyond in this capacity will make for a more reliable and proactive partner across the board.


Z Energy’s CDO: ‘First trust, then transform’

My view on transformation—digital transformation, in particular—is we’re moving toward an endpoint. Lots of people will say it’s ever-changing, and I agree that, from a technology point, it is. But to me, the endpoint is an agile organization, and I don’t mean agile as in the way we think about doing work, but a nimble organization. If you can transform your organization to the point where it’s able to rapidly respond to whatever happens, then that’s the transformation. So, is there an endpoint to that? There are always tweaks along the way, but you can see organizations move from being static to being able to deal with whatever comes at them. That’s relevant to us at Z, because you could say, “In 40 years’ time, there’s no future in hydrocarbons.” That might happen in 10 years or 100 years. I have no idea which of those is true, and I have to be ready for all of them. We also don’t know what the replacements are going to be. Are we looking at electricity, hydrogen? What’s the role of biofuels here? All of those things are rapidly changing. The Prime Minister actually just announced that the biofuels mandate is now going to be cancelled, so how do we respond to that?


Can this new prototype put an end to cyberattacks?

The new prototype, called the Arm Morello Evaluation Board, aims to put an end to this. It is based on the CHERI (capability hardware enhanced RISC instructions) instruction set architecture, which was developed by Cambridge University and SRI International. It is compartmentalized to ensure that any breaches remain confined to a particular aspect, rather than spreading throughout the whole system. This is just one of the scenarios where CHERI's memory-safe features come in handy. Access to the technology was facilitated by the Digital Security by Design (DSbD), a government-backed initiative that aims to improve the safety of the UK's digital landscape. Although it is still in the research phase, the prototype is claimed to have the potential to help protect industries and firms. already, the programme has racked up over a thousand days in development work wot other 13 million lines of code being experimented with. There will also be a new round of experiments starting from May 25, which will explore porting the Morello platform, as well as how the CHERI architecture can secure applications against memory flaws and whether code can be improved by highlighting errors and vulnerabilities.


Don’t Let Time Series Data Break Your Relational Database

Time series is all about understanding the current picture of the world and offering immediate insight and action. Relational databases can perform basic data manipulation, but they can’t execute advanced calculations and analytics on multiple observations. Because time series data workloads are so large, they need a database that can work with large datasets easily. Apache Arrow is specifically designed to move large amounts of columnar data. Building a database on Arrow gives developers more options to effectively operate on their data by way of advanced data analysis and the implementation of machine learning and artificial intelligence tools such as Pandas. Some may be tempted to simply use Arrow as an external tool for a current solution. However, this approach isn’t workable because if the database doesn’t return data in Arrow format right from the source, the production application will struggle to ensure there’s enough memory to work with large datasets. The code source will also lack the compression Arrow provides. 


When cloud pros fumble office politics

The adoption of cloud services can create tension between early adopters and those who are resistant to change. Early adopters may feel frustrated by the resistance of others, while those who are resistant may feel excluded from decision-making processes and overwhelmed by the pace of change. The fix here is education and empathy. I’m often in the middle between factions that both feel threatened by the pace of cloud adoption. One group believes that it’s too fast; the other believes it’s too slow. Both sides need to hear each other out and adapt a pace that seems reasonable—and more importantly, that returns the most value back to the business. ... Cloud services can raise concerns about security and privacy, particularly in industries that store sensitive data. Employees may be worried about the security of their personal data, while IT departments may be stressed about the security of company data stored in the cloud. Of course, cloud-based security has been better than traditional security for some time now. But that’s not the perception, and you’re dealing with perceptions, not realities.


Where did Microservices go

One of the most significant hurdles is conducting transactions across multiple services. Although there are several methods for handling distributed transactions, such as the two-phase commit protocol, compensating transactions, event-driven architectures, and conflict-free replicated data types, none of them can provide the same simplicity that developers enjoy in a monolithic architecture with a database that offers transaction functionality. When things go wrong in a distributed system, data inconsistency can arise, which is perhaps the worst problem a developer wants to deal with. ... Serverless computing is actually an evolution of Microservices architecture instead of a replacement. Both approaches share the same goal of breaking down monolithic applications into smaller, more manageable components. However, while microservices typically involve deploying each service to a separate container or instance, serverless computing allows developers to focus solely on the code for individual functions, without worrying about the underlying infrastructure.


How AI Can Transform The Software Engineering Process

Architecture definition - As far as app architecture goes, AI cannot evaluate the trade-offs between different architectural decisions. So it will still rely on the intuition and experience of a senior developer for the most part. Nevertheless, AI can drill down the architecture by suggesting relevant services from public cloud providers or calculating the TCO of the target architecture. Coding - Writing code is one of the areas that will definitely benefit from AI. For example, when using Bing AI, the role of senior engineers will be to verify and polish the code since the tool still makes mistakes. A new method for developing code will be applied widely: prompt engineering. It will be used for generating code snippets based on given prompts, facilitating prototyping and iterating on different ideas. Unit tests. Since unit tests are typically automated, they are one of the areas where AI will be most useful. For example, CodeWhisperer does an excellent job at automating unit tests.


Welcome to the postmodern enterprise architecture era

Postmodern enterprise architecture is geared toward the computer science world as we understand it today. The talent pool has greatly expanded, and while there are still talent shortages, the ability to build and retain a high-performing team is within any company's grasp. The software and hardware building blocks have greatly matured; computing environments can be set up or resized in minutes, and complex user experiences can be built out of commodity parts. The wall between the business and engineers is crumbling, with cross-functional agile teams working together to incrementally improve with each (anytime you need to) release. Instead of systems, we are thinking more and more about platforms that both architects and our business partners can adapt for use in the latest customer experience. In this postmodern world, we need an enterprise architecture function that is built for today. Good news: We don't have to start from scratch. We have developed many great practices and utilities on the journey to modern enterprise architecture, and now we must consider how to use those tools cost-effectively.


Clocking out: Millennials and the workforce

In perhaps the finest section of Saving Time, Odell comes across an “embarrassingly spot-on characterization” of her own life in an academic paper. The sociologist Hartmut Rosa sketches out the life and habits of a fictitious professor named Linda. Linda has a job and some means, but she feels she is chronically busy, “always falling short and running behind” her various commitments. It is possible to be genuinely ensnared by a lack of time—there are those who have to work multiple jobs to pay the rent while also raising children—but Rosa argues that Linda’s predicament is self-generated. According to Odell’s analysis, Linda sees herself as “controlled and surveilled” by society’s expectation that she be busy and productive at all times, by what Rosa neatly calls the “logic of expansion.” This concept has been so thoroughly ingrained that it has been adopted even by those with plenty of agency. This analysis is squeezed into the barnstorming first half of Saving Time. 


9 Questions for IT Leaders to Ask About Cloud Cybersecurity

Visibility and context are two of the top challenges in cloud cybersecurity, according to Rick McElroy, principal cybersecurity strategist at cloud computing company VMware. “Who is logging in to what and when? Who is uploading private documents to public file shares? How can I follow an identity around a multi-cloud environment to determine if it is doing something malicious? Is this PowerShell script something my system administrators are using or is it part of a ransomware attack?” he asks. “These are all hard questions to answer for teams today.” Amit Shaked, co-founder and CEO of multi-cloud data security platform Laminar, warns about the increase in unknown or “shadow data.” “Data scientists and developers can now proliferate data in just a few clicks with agile cloud services,” he explains. “As a result, it's become easier than ever before for IT and security teams to lose sight of this data.” Bringing together teams that have historically worked in siloes can help to increase cloud visibility and teams’ ability act on security needs.



Quote for the day:

"You either have to be first, best, or different." -- Loretta Lynn

Daily Tech Digest - April 24, 2023

Is Strategic Thinking Dead? 5 Ways To Think Better For An Uncertain Future

Strategic thinking is distinguished from tactical thinking because it takes a longer view rather than reacting to events as they happen. It pushes you to be proactive in your actions, rather than reactive. And even in the addressing the immediate, strategic thinking can actually increase your effectiveness—because your advanced planning will have given you the opportunity to explore potential situations, assess responses and judge outcomes—and these can prepare you for how you react when you have less runway. ... One of the hallmarks of a strategic thinker is clarity of purpose. Be sure you’re clear about where you want to go—as an individual, a team or a business. Know your true north because it will help you choose wisely among multiple options. The language you choose to describe where you want to be (or how you understand a challenge) will constrain or create possibilities, so also be careful about how you describe your intentions. If your purpose is to unleash human potential for students, that will likely take you farther than a goal to simply provide great classroom experiences. 


Why Backing up SaaS Is Necessary

Looking at the possibilities to protect their data on those SaaS-platforms, organisations started to quickly realise that their SaaS solutions were not as protected as their other applications run in their own datacentre or their private cloud. Companies that did know that fact had to put up with it as the product forced them to use it as it was. Users had to learn the hard way that most SaaS solutions have a shared responsibility model where the customer is responsible for his or her own data. ... Even more critically, it’s important to ensure backups are stored in an independent cloud dedicated to data protection and not dependent on one of the large hyperscalers. A third-party cloud gives total control over backed up data and can easily ensure three to four copies are always made and reside in multiple locations. By retaining SaaS data in an independent backup-focused cloud, customers can also avoid the egress charges that come part and parcel with the public cloud. These extra charges often result in surprise bills after data restores and make it difficult to budget.


7 steps to take before developing digital twins

Leaders in any emerging technology area look for stories to inspire adoption. Some should be inspirational and help illustrate the art of the possible, while others must be pragmatic and demonstrate business outcomes to entice supporters. If your business’s direct competitors have successfully deployed digital twins, highlighting their use cases often creates a sense of urgency. ... Harry Powell, head of industry solutions at TigerGraph, says, “When creating a digital twin of a moderately sized organization, you will need millions of data points and relationships. To query that data, it will require traversing or hopping across dozens of links to understand the relationships between thousands of objects.” Many data management platforms support real-time analytics and large-scale machine learning models. But digital twins used to simulate the behavior across thousands or more entities, such as manufacturing components or smart buildings, will need a data model that enables querying on entities and their relationships


Enterprise Architecture Management (EAM) in digital transformation

The point is to accompany these “things” throughout their entire life cycle on the basis of a coherent technology vision, to recognise innovation potential, to identify technology risks, to derive a technology strategy. And often EAM already fails because of this corporate language, because the mostly abstract orders, including abstract or economic business language, come directly from the board and “have to be implemented”. There is usually no budgeting, because “everyone has to participate”. This is the reality, and EAM is ground between the board and development and operations. ... What could be a benefit of EAM? You always have to think about this question in the context of your own company! A TOGAF copy of the EAM goals or principles is not helpful, e.g. “The primary goal of EAM is cost reduction”. Has never worked. Yes, it may be that costs can be reduced. But EAM always brings more quality, and the cost savings are not accounting, they always go straight into new methods or procedures: a better overview of the applications enables projects to start faster, the time gained and the less effort is immediately put into sensible other efforts.


Online Safety Bill could pose risk to encryption technology used by Ukraine

The Online Safety Bill will give the regulator, Ofcom, powers to require communications companies to install technology, known as client-side scanning (CSS), to analyse the content of messages for child sexual abuse and terrorism content before they are encrypted. The Home Office maintains that client-side scanning, which uses software installed on a user’s phone or computer, is able to maintain communications privacy while policing messages for criminal content. But Hodgson told Computer Weekly that Element would have no choice but to withdraw its encrypted mobile phone communications app from the UK if the Online Safety Bill passed into law in its current form. Element supplies encrypted communications to governments, including the UK, France, Germany, Sweden and Ukraine. “There is no way on Earth that any of our customers would every consider that setup [client-side scanning], so obviously we wouldn’t put that into the enterprise product,” he said. “But it would also mean that we wouldn’t be able to supply a consumer secure messaging app in the UK. ...” he added.


The biggest data security blind spot: Authorization

When authorization is overlooked, companies have little to no visibility into who is accessing what. This makes it challenging to track access, identify unusual behavior, or detect potential threats. It also leads to having “overprivileged” users – a leading cause of data breaches according to many industry reports. Authorization oversight is critical when employees leave a company or change roles within the organization, as they might retain access to sensitive data they no longer need. If access rights never expire, unauthorized users have access to sensitive data. And with layoffs, the risk of data theft increases. The lack of proper authorization also puts companies at risk of non-compliance with privacy laws like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), which can result in significant penalties and reputational damage. Most organizations store sensitive data in the cloud, and the majority do so without any kind of encryption, making proper authorization all the more necessary.


AI can write your emails, reports, and essays. But can it express your emotions? Should it?

What do we lose when we outsource expressing our emotions to an AI chatbot? We've all heard that sitting on our emotions and feeling them is how we process them and get the intensity to pass. Speaking from the heart about a complex, heavy topic is one way we can feel true catharsis. AI can't do that processing for us. There's a common theme during periods of technological innovation that technology is supposed to do the mundane, annoying, dangerous, or insufferable tasks that humans hate doing. Many of us would sometimes prefer to avoid emotional processing. But experiencing complex emotions is what makes us human. And it's one of the few things an AI model as advanced as ChatGPT can't do. If you think of expressing emotions as less of an experience and more of a task, it might seem clever to automate them. But you can't conquer human emotions by passing the unsavory parts of them to a language model. Emotions are critical to the human experience, and denying them their place within yourself can lead to unhealthy coping mechanisms and poor physical health.


Benefits of data mesh might not be worth the cost

Data mesh might be a good framework for businesses that acquire companies but don't consolidate with them, thus wanting a decentralized approach to most or even all of the individual companies' data, Thanaraj said. It might also be a good option for large organizations that operate in multiple countries. These organizations' leaders might want to -- and are sometimes required to -- maintain local data autonomy. "That's where I see data mesh being a much more appropriate data architecture to apply," Thanaraj said. Still, questions remain about the long-term value of data mesh. In fact, Gartner labeled data mesh as "obsolete before plateau" in its 2022 "Hype Cycle for Data Management." Moreover, organizations could more readily use other better-defined and more easily implemented approaches to improve their data programs, Aiken said. Organizations have DataOps, existing data management frameworks and data governance practices at their disposal. If a data program doesn't follow best data management practices, data mesh won't improve it. "Those improvements could be achieved by other practices that don't have a buzz around them like data mesh," he said.


Do the productivity gains from generative AI outweigh the security risks?

In short, using generative AI to code is dangerous, but its efficiencies are so great that it will be extremely tempting for corporate executives to use it anyway. Bratin Saha, vice president for AI and ML Services at AWS, argues the decision doesn’t have to be one or the other. How so? Saha maintains that the efficiency benefits of coding with generative AI are so sky-high that there will be plenty of dollars in the budget for post-development repairs. That could mean enough dollars to pay for extensive security and functionality testing in a sandbox — both with automated software and expensive human talent — and the very attractive spreadsheet ROI. Software development can be executed 57% more efficiently with generative AI — at least the AWS flavor — but that efficiency gets even better if it replaces les experienced coders, Saha said in a Computerworld interview. “We have trained it on lots of high-quality code, but the efficiency depends on the task you are doing and the proficiency level,” Saha said, adding that a coder “who has just started programming won’t know the libraries and the coding.”


The staying power of shadow IT, and how to combat risks related to it

The problem, when it comes to uncovering shadow IT, is that information about what applications exist and who has access to them is spread across a company, in many different silos. It lives in the files of sometimes hundreds of business application owners – end-users in marketing, sales, customer service, finance, HR, product development, legal and other departments who acquired the applications. How do most organizations go about finding this data? They send emails, Microsoft Teams or Slack messages to employees asking them to notify IT if they have purchased or signed up for a free app, and who they’ve given access to (and hope everyone will respond). Then IT manually inputs any information they get into a spreadsheet. ... The data must be automatically and continuously collected and normalized. It must be made available to all SaaS management stakeholders, from the people who own and must therefore take responsibility for managing their apps, to IT leaders and admins, IT security teams, procurement managers, and more.



Quote for the day:

“Unless we are willing to go through that initial awkward stage of becoming a leader, we can’t grow.” -- Claudio Toyama

Daily Tech Digest - April 23, 2023

Shadow IT, SaaS Pose Security Liability for Enterprises

All issues surrounding shadow IT can be traced back to an organization's lack of visibility. An unmanaged software stack gives IT teams zero insight into how sensitive company information is being used and distributed. Since these tools are not vetted properly and are left unmonitored, the data they store is not adequately protected by most organizations. This creates the perfect framework for hackers to easily seize important data, such as confidential financial records or personal details. Critical corporate data is at risk because most, if not all, SaaS tools require corporate credentials and access to an organization's internal network. A recent survey by Adaptive Shield and CSA actually shows that in the past year alone, 63% of CISOs have reported security incidents from this type of SaaS misuse. As stated prior, the recurring theme that many businesses are experiencing with shadow IT is the risk associated with a data breach. However, it is equally important to realize the potential industry scrutiny that businesses face and the penalties they receive from regulators because of sprawling shadow IT.


The Cyber Resilience Act Threatens Open Source

At the heart of the issue is the need for organizations to self-certify their compliance with the act. Since open source is often maintained by a small loose-knit group of contributors, it is difficult to see how this will work. Here’s the concern in a nutshell. Suppose you write up a cool little C++ program for your own use. You aren’t a company, and you didn’t do it for profit. Wanting to share your work, you post your program on GitHub with an open source license. ... In fact, it is even encouraged. That’s how open source works. The problem is when the GRID database has a problem that causes a data breach. The problem turns out to be a vulnerability in your code. Under the proposed law, it is possible you’d be left holding the bag for a large sum of money thanks to your generous hobby project that didn’t earn you a cent. The situation is even more complex if your code has multiple contributors. Was it your code that caused the breach or the other developer’s code? Who “owns” the project? Are all contributors liable? 


Why Your Personal Brand Needs A Niche: The Benefits Of Specialization

Finding your niche also allows you to focus your energy and resources on a specific area, reducing the chances of you feeling overwhelmed trying to be everything to everyone. A niche provides a compass for your efforts, ensuring that the work you do aligns with your skills and interests. While being more specific can feel uncomfortable, it ultimately enables employers and clients to understand the specific value you offer. In the early days of my consultancy, I found myself saying yes to everything, including some speaking engagements that fell outside of my immediate area of expertise or taking on clients who demanded a lot of additional effort on my part to cover the entire scope of the services they sought that went beyond my offerings. Over time, I defined clearer boundaries around my scope of services. I also tried to more explicitly communicate which services I did not offer or consider within my area of expertise. When you niche down and clearly define your area of focus, it enables you to make clearer career choices, only pursuing opportunities that allow you to reinforce your positioning.


Former Microsoft CIO Jim DuBois Dishes On AI and Future of IT

One of the things we have to figure out in the future of work is that a huge part of the population isn’t able to take advantage of this hybrid and remote opportunity. And what do we do for them? Do we end up getting to a place where people are picking jobs based on whether they can work remote or not? And are we going to have to compensate people differently for being on- or off site? That’s something that hasn’t been solved … There are a lot of companies that haven’t figured out how to keep the collaboration and the culture going in a remote workforce. So they just said, “Oh, we’ve got to get people back into the office do that.” I would say, “Or, you could figure out how to collaborate and keep your culture going with remote.” ... I’m a believer in carrot rather than stick incentives. Rather than compliance requirements, we need to focus on the fact that there’s so much value in ESG and in having a more diverse team. We need to focus more on the incentives and less on the “because we told you to” part. 


Using generative AI to understand customers

In terms of better understanding customers, generative AI is really effective in summarising information. Companies are already using the technology to create auto-summaries of market research reports, eliminating the need for having to precis reports manually. Going forward, there is potential to expand this use case to summarise large volumes of information quickly and efficiently in order to provide concise answers to key business questions. ... Generative AI can also make it easier for all stakeholders to access market research without having to involve an insights manager each time, thereby removing access barriers and facilitating the seamless integration of consumer insights into daily operations. Moreover, generative AI can help to address common concerns associated with all stakeholders accessing market research, such as non-research workers asking the wrong questions. By prompting relevant questions related to their search query, the technology can help those without research backgrounds to ask better questions, ultimately leading to more accurate and useful customer information.


Optimizing SaaS With Automation and Zero-Touch IT

While it may seem daunting, the journey to achieving zero-touch IT is not out of reach. It does require investment in time, technology and people, however. And once you get there the efficiencies will be apparent. Let’s break these benefits down by category. Zero-touch IT helps companies manage their software applications much more effectively. IT groups have historically gotten bogged down in the manual execution of tasks that are complicated and tedious, despite being basic and common. Two processes cited as top concerns for IT professionals, onboarding new employees and offboarding departing employees, are concrete examples. But managing the user life cycle of an employee doesn’t just start at onboarding and stop at offboarding. Many changes take place during an employee’s time at the organization—promotions, changes in departments, password resets, new project assignments, etc. And every single time an event like this occurs, some type of action, like giving or revoking access to new files, elevating access rights or taking security steps to prevent unauthorized access is required. 


Cyber insurer launches InsurSec solution to help SMBs improve security, risk management

InsurSec solutions are new, emerging offerings, but the concept behind them and its potential to add value to involved parties is something being recognized more widely, particularly for SMBs and organizations struggling with an adverse blend of low maturity and cost constraints. “I think the insurance market is recognizing that their future offering in this space has to grow beyond simple loss protection,” Paul Watts, distinguished analyst at the Information Security Forum, tells CSO. “Providing complementary services to help organizations with proactive and reactive management of cyber risk could also help foster stronger relationships between insurer and client.” Both parties stand to benefit here – by engaging in this way, risk is better (and jointly) managed, Watts says. Insurers are mitigating losses, and clients are drawing down on capabilities that were previously too expensive for consideration and could see lower premiums as a result. 


Novel Technique Exploits Kubernetes RBAC to Create Backdoors

Researchers at Cybersecurity firm Aqua Security said they recorded and analyzed an attack on its Kubernetes honeypots that used the RBAC system to gain persistence. Kubernetes Role-based access control or RBAC is a method of restricting network access based on the roles of individual users within an organization. In their honeypots, the researchers exposed AWS access keys in various locations on the cluster and received a beacon indicating that the access keys were used by the attacker to try and gain further access to the cloud service provider account and leverage the attack to steal more resources and data. "The findings are significant as they shed light on the risks of misconfigurations and how even large organizations can overlook the importance of securing their clusters, leaving them vulnerable to potential disasters with just one mistake," according to researchers. The large-scale campaign dubbed RBAC Buster allowed attackers to gain initial access by exploiting a misconfigured API server that allowed unauthenticated requests from anonymous users with privileges.


How does blockchain fit into today’s enterprise?

According to Bennett, outside of the financial services sector, “we are still not at the point where we can confidently say that blockchain really is delivering the business value that people are looking for, simply because it is incredibly difficult to actually set up a blockchain network that at the end of the day really needs all those blockchain features,” she said. Stack Overflow recently conducted a survey to find out what new technologies made it past what Gartner refers to as the hype cycle. Many new technologies can stir up excitement in the industry, but not all will actually see widespread adoption. They ranked technologies on a scale of experimental to proven and positive to negative impact. On a scale from zero (experimental) to 10 (proven), blockchain technology came in towards the middle at 4.8. And on a scale from zero (negative impact) to 10 (positive impact), it received a score of 5.3. Another survey by Foundry echoes these sentiments. It found that 51% of respondents were not interested in adopting blockchain technology within their organization.


Navigating The Future Of Cyber

Cyber is about more than protecting information—risk management, incident response planning and threat intelligence can often be directly correlated to increasing trust within businesses. Many organizations recognize the importance of prioritizing cybersecurity and have reported significant improvements in trust and efficiency through their efforts. In Deloitte Global’s latest Future of Cyber Survey, almost 70% of businesses that were identified as highly mature organizations when it comes to cyber believe cybersecurity has positively impacted their organization's reputation and productivity. From robust cyber planning across the business and effective board-level engagement—the high cyber performers recognize the importance of cyber responsibility and involvement across the whole organization. Beyond looking across the organization, cyber planning strategies should be regularly reviewed and updated to protect trust in the organization.



Quote for the day:

"Without courage, it doesn't matter how good the leader's intentions are." -- Orrin Woodward

Daily Tech Digest - April 22, 2023

What CIOs need to become better enablers of sustainability

Key to this is a greater understanding of business operations and their production of CO2, or use of unsustainable practices and resources. As with most business challenges, data is instrumental. “Like anything, the hard work is the initial assessment,” says CGI director of business consulting and CIO advisor Sean Sadler. “From a technology perspective, you need to look at the infrastructure, where it’s applied, how much energy it draws, and then how it fits into the overall sustainability scheme.” CIOs who create data cultures across organizations enable not only sustainable business processes but also reduce reliance on consultancies, according to IDC. “Organizations with the most mature environmental, social, and governance (ESG) strategies are increasingly turning to software platforms to meet their data management and reporting needs,” says Amy Cravens, IDC research manager, ESG Reporting and Management Technologies. 


How to implement observability in your IT architecture

Although it has grown out of the APM market, observability is more than just APM with a new name and marketing approach. The most crucial factor differentiating observability from APM is that observability includes three distinct monitoring approaches—tracing, metrics, and logs—while APM provides tracing alone. By collecting and aggregating these various types of data from multiple sources, observability offers a much broader view of the overall system and application health and performance, with the ability to gain much deeper insights into potential performance issues. Another important distinction is that open source tools are the foundation of observability, but not APM. While some APM vendors have recently open-sourced the client side of their stack, the server side of all the popular commercial APM solutions is still proprietary. These distinctions do not mean that observability and APM are unconnected. Application performance management can still be an important component of an observability implementation.


How Conversational Programming Will Democratize Computing

The scope of a conversation must mirror a human “mental stack”, not that of a computer. When I use a conventional Windows interface on my laptop, I am confronted with the computer’s file system which is presented as folders and files. That effort is reversed in conversational programming — the LLM system has to work with my limited human cognition facilities. This means creating things in response to requests, and reporting outcomes at the same level that I asked for them. Returning arcane error codes in response to requests will immediately break the conversation. We have already seen ChatGPT reflect on its errors, which means a conversation should retain its value for the user. ... The industrialization of LLMs is the only thing we can be reasonably sure about, because the investment has already been made. However, the rapid advancement of GPT systems will likely run aground in the same areas that other large-scale projects have in the past. The lack of collaboration between large competitors has eroded countless good ideas that depended on interoperability.


Dark Side of DevOps - the Price of Shifting Left and Ways to Make it Affordable

On the one hand, not having a gatekeeper feels great. Developers don’t have to wait for somebody’s approval - they can iterate faster and write better code because their feedback loop is shorter, and it is easier to catch and fix bugs. On the other hand, the added cognitive load is measurable - all the tools and techniques that developers have to learn now require time and mental effort. Some developers don’t want that - they just want to concentrate on writing their own code, on solving business problems. ... However, as companies grow, so does the complexity of their IT infrastructure. Maintaining dozens of interconnected services is not a trivial task anymore. Even locating their respective owners is not so easy. At this point, companies face a choice - either reintroducing the gatekeeping practices that negatively affect productivity, or to provide a paved path - a set of predefined solutions that codifies the best practices, and takes away mental toil, allowing developers to concentrate on solving business problems.


Why generative AI will turbocharge low-code and no-code development

Generative AI's integration into low-code and no-code platforms will lower the barriers to adoption of these development environments in enterprises, agreed John Bratincevic, principal analyst at Forrester. “The integration of generative AI will see adoption of low-code by business users, since the learning curve for getting started on developing applications will be even lower,” Bratincevic said. The marriage of generative AI with low-code and no-code platforms will aid professional developers as well, analysts said. ... “These generative AI coding capabilities will be most helpful for developers working on larger projects that are looking for shortcuts to support commoditized or common sense requests,” said Hyoun Park, principal analyst at Amalgam Insights. “Rather than searching for the right library or getting stuck on trying to remember a specific command or term, GPT and other similar generative AI tools will be able to provide a sample of code that developers can then use, edit, and augment,” Park said.


Start with Sound Policies, Then Customize with Required Exceptions

Number one is our culture of security, not just within the cybersecurity organization, but broader than the cybersecurity organization looking at the entire Providence org – instilling security practices into our business practices, or business processes, instilling security mindset into our caregivers, because our caregivers truly are on the front lines of the cybersecurity battlefield. They’re the ones that are receiving phishing emails, they’re the ones that are making decisions on what they click on, what they don’t click on, interactions with our clinical device vendors, or clinical application vendors. They’re making risk choices every day. So informing them about security, training them on security, and instilling security culture – broader than just the security organization – has been a real focus of ours this year. Another focus of ours has been on implementing or continuing the journey, I should say, toward a zero trust approach here at Providence. And when I say zero trust, a lot of people use the term, “never trust, always verify.” 


Leap of Faith: Building Trust and Commitment with Engineering

Leaping before you’re ready will result in disappointment if not outright disaster. It is important to understand what knowledge and muscle is required along the various stages that lead to full Engineering Trust & Autonomy. Each organization must determine this trust criteria for themselves, however, it is imperative to recognize starting from the future end-state goal and working backwards promotes the greatest benefit (e.g., innovative inspirational differentiation). To be most effective, seek out Leading teams already doing this in your organization. They do exist, but they most likely are considered one-offs, rogue, and exceptions to the internal norm. Good. That’s what you’re looking for! ... Once trust criteria is shared and definitive trust-boundaries are in place, the hardest piece of this puzzle must be executed: Executive Leadership and Individual Commitment Putting your strategy into play takes time and during that time doubts will creep in. This is normal, however, there are a few tricks to leverage that ensure you stay the course


Used Routers Often Come Loaded With Corporate Secrets

The big danger is that the wealth of information on the devices would be valuable to cybercriminals and even state-backed hackers. Corporate application logins, network credentials, and encryption keys have high value on dark web markets and criminal forums. Attackers can also sell information about individuals for use in identity theft and other scamming. Details about how a corporate network operates and the digital structure of an organization are also extremely valuable, whether you're doing reconnaissance to launch a ransomware attack or plotting an espionage campaign. For example, routers may reveal that a particular organization is running outdated versions of applications or operating systems that contain exploitable vulnerabilities, essentially giving hackers a road map of possible attack strategies. ... Since secondhand equipment is discounted, it would potentially be feasible for cybercriminals to invest in purchasing used devices to mine them for information and network access and then use the information themselves or resell it.


ChatGPT may hinder the cybersecurity industry

ChatGPT’s AI technology is readily available to most of the world. Therefore, as with any other battle, it’s simply a race to see which side will make better use of the technology. Cybersecurity companies will need to continuously combat nefarious users who will figure out ways to use ChatGPT to cause harm in ways that cybersecurity businesses haven’t yet fathomed. And yet this fact hasn’t deterred investors, and the future of ChatGPT looks very bright. With Microsoft investing $10 billion in Open AI, it’s clear that ChatGPT’s knowledge and abilities will continue to expand. For future versions of this technology, software developers need to pay attention to its lack of safety measures, and the devil will be in the details. ChatGPT probably won’t be able to thwart this problem to a large degree. It can have mechanisms in place to evaluate users’ habits and home in on individuals who use obvious prompts like, “write me a phishing email as if I’m someone’s boss,” or try to validate individuals’ identities. Open AI could even work with researchers to train its datasets to evaluate when their text has been used in attacks elsewhere.


A New Era of Natural Language Search Emerges for the Enterprise

Due to the statistical nature of their underlying technology, chatbots can hallucinate incorrect information, as they do not actually understand the language but are simply predicting the next best word. Often, the training data is so broad that explaining how a chatbot arrived at the answer it gave is nearly impossible. This “black box” approach to AI with its lack of explainability simply will not fly for many enterprise use cases. Welsh gives the example of a pharmaceutical company that is delivering answers to a healthcare provider or a patient who visits its drug website. The company is required to know and explain each search result that could be given to those asking questions. So, despite the recent spike in demand for systems like ChatGPT, adapting them for these stringent enterprise requirements is not an easy task, and this demand is often unmet, according to Welsh. ... Welsh predicts the companies that will win during this new era of the enterprise search space are those that had the foresight to have a product on the market now, and though the competition is currently heating up, some of these newer companies are already behind the curve. 



Quote for the day:

"Leaders must be good listeners. It's rule number one, and it's the most powerful thing they can do to build trusted relationships." -- Lee Ellis