Daily Tech Digest - September 19, 2023

Experts: 'Quiet cutting' employees makes no sense, and it's costly

The practice involves reassigning workers to roles that don’t align with their career goals to achieve workforce reduction by voluntary attrition — allowing companies to avoid paying costly severance packages or unemployment benefits. “Companies are increasingly using role reassignments as a strategy to sidestep expensive layoffs,” said Annie Rosencrans, people and culture director at HiBob, a human resource platform provider. “By redistributing roles within the workforce, organizations can manage costs while retaining valuable talent, aligning with the current trend of seeking alternatives to traditional layoffs.” ... The optics around quiet cutting and its effects on employee morale are a big problem, however, and experts argue it’s not worth the perceived cost savings. Reassigning workers to jobs that may not fit their hopes for a career path or align with their skills can demoralize remaining workers and lead to “disengagement,” according to Chertok. He argued that the quiet cutting trend isn’t necessarily intentional; it's more indicative of corporate America’s need to reprioritize how talent is moved around within an organization.


Why We Need Regulated DeFi

One of DeFi’s greatest challenges is liquidity. In a decentralized exchange, liquidity is added and owned by users, who often abandon one protocol for another offering better rewards, resulting in unstable liquidity on DeFi protocols. A liquidity pool is a group of digital assets gathered to facilitate automated and permissionless trading on a decentralized exchange platform. The users of such exchange platforms don’t rely on a third party to hold funds but transact with each other directly. ... There are many systemic risks currently present in DeFi. For example, potential vulnerabilities in smart contracts can expose users to security breaches. DeFi platforms are often interconnected, meaning a problem on one platform can quickly spread and impact others, potentially causing systemic failures. Another potential systemic risk is the manipulation or failure of oracles, which bring real-world data onto the blockchain. This can result in bad decisions and lead to losses. Ultimately, regulated DeFi can help enforce security standards, fostering trust among users.
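
To make the pool mechanics concrete, here is a minimal sketch of a constant-product automated market maker (the x * y = k design popularized by Uniswap). The class, reserves, and fee below are illustrative assumptions, not any specific protocol's implementation:

```python
# Minimal constant-product AMM sketch (x * y = k), for illustration only.
# Real DEX contracts run on-chain and handle LP share accounting, slippage
# limits, and oracle interactions; none of that is modeled here.

class LiquidityPool:
    def __init__(self, reserve_a: float, reserve_b: float):
        self.reserve_a = reserve_a  # units of token A deposited by liquidity providers
        self.reserve_b = reserve_b  # units of token B deposited by liquidity providers

    def swap_a_for_b(self, amount_a: float, fee: float = 0.003) -> float:
        """Trade token A for token B while keeping reserve_a * reserve_b constant."""
        k = self.reserve_a * self.reserve_b   # invariant before the swap
        effective_a = amount_a * (1 - fee)    # the fee stays in the pool
        new_reserve_b = k / (self.reserve_a + effective_a)
        amount_b_out = self.reserve_b - new_reserve_b
        self.reserve_a += amount_a
        self.reserve_b = new_reserve_b
        return amount_b_out

pool = LiquidityPool(1_000.0, 1_000.0)
print(pool.swap_a_for_b(100.0))  # large trades move the price: ~90.7 out, not 100
```

Because prices come straight from the reserve ratio, liquidity leaving the pool (users chasing better rewards elsewhere) shrinks the reserves and makes every trade move the price more, which is exactly the instability described above.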


Microsoft Azure Data Leak Exposes Dangers of File-Sharing Links

There are so many pitfalls in setting up SAS tokens that Wiz's Luttwak recommends against ever using the mechanism to share files from a private cloud storage account. Instead, companies should have a public account from which resources are shared, he says. "This mechanism is so risky that our recommendation is, first of all, never to share public data within your storage account — create a completely separate storage account only for public sharing," Luttwak says. "That will greatly reduce the risk of misconfiguration. If you want to share public data, create an external storage account for public data and use only that." For those companies that still want to share specific files from private storage using SAS URLs, Microsoft has added SAS token detection to GitHub's monitoring for exposed credentials and secrets and has rescanned all repositories, the company stated in its advisory. Microsoft recommends that Azure users limit themselves to short-lived SAS tokens, apply the principle of least privilege, and have a revocation plan.
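
For teams that do keep using SAS, Microsoft's guidance (short-lived, least-privilege tokens) looks roughly like the following with the azure-storage-blob Python SDK; the account, container, and blob names are placeholders, not anything from the advisory:

```python
# A minimal sketch of a short-lived, read-only SAS token for one blob.
from datetime import datetime, timedelta, timezone
from azure.storage.blob import generate_blob_sas, BlobSasPermissions

ACCOUNT_NAME = "examplesharing"   # hypothetical account used only for sharing
ACCOUNT_KEY = "<account-key>"     # keep secret; never embed the key itself in URLs

sas_token = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name="shared",
    blob_name="report.pdf",
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),                # least privilege: read only
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),  # short-lived, not years
)
url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/shared/report.pdf?{sas_token}"
```

The expiry is the crucial knob: overly long-lived tokens are what turn a single shared URL into a standing exposure.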


Chaos Engineering: Path To Build Resilient and Fault-Tolerant Software Applications

The objective of chaos engineering is to unearth system constraints, susceptibilities, and possible failure modes in a controlled and planned manner, before they surface as serious incidents with severe impact on the organization. A few of the most innovative organizations, drawing on lessons from past failures, understood the importance of chaos engineering and adopted it as a key strategy for unraveling deeply hidden issues before they cause future failures and business impact. Chaos engineering lets application developers forecast and detect probable collapses by disrupting the system on purpose. The disruption points are identified and altered based on potential system vulnerabilities and weak points. This way, system deficiencies are identified and fixed before they escalate into an outage. Chaos engineering is a growing trend for DevOps and IT teams. A few of the world’s most technologically innovative organizations, such as Netflix and Amazon, are pioneers in adopting chaos testing and engineering.
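
As an illustration of "disrupting the system on purpose," here is a toy in-process fault injector. Real chaos tooling (Netflix's Chaos Monkey, for example) works at the infrastructure level and only within controlled, observable experiments, so treat this purely as a sketch of the idea:

```python
# Toy fault injection: randomly fail or delay a dependency call so the
# team can observe whether callers retry, time out, or degrade gracefully.
import functools
import random
import time

def inject_chaos(failure_rate: float = 0.1, max_latency_s: float = 2.0):
    """Decorator that randomly raises an error or adds latency to a call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("chaos: injected dependency failure")
            time.sleep(random.uniform(0, max_latency_s))  # injected latency
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_chaos(failure_rate=0.2)
def fetch_inventory(item_id: int) -> dict:
    # Stands in for a real downstream service call.
    return {"item": item_id, "in_stock": True}
```

The experiment is not the injection itself but the observation: does the surrounding system retry, fall back, or page a human before users notice?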


Unregulated DeFi services abused in latest pig butchering twist

At first glance, the pig butchering ring tracked by Sophos operates in much the same way as a legitimate trading operation, establishing pools of cryptocurrency assets and adding new traders – or, in this case, victims – until such time as the cyber criminals drain the entire pool for themselves. This is what is known as a rug-pull. ... “When we first discovered these fake liquidity pools, it was rather primitive and still developing. Now, we’re seeing shā zhū pán scammers taking this particular brand of cryptocurrency fraud and seamlessly integrating it into their existing set of tactics, such as luring targets over dating apps,” explained Gallagher. “Very few understand how legitimate cryptocurrency trading works, so it’s easy for these scammers to con their targets. There are even toolkits now for this sort of scam, making it simple for different pig butchering operations to add this type of crypto fraud to their arsenal. While last year Sophos tracked dozens of these fraudulent ‘liquidity pool’ sites, now we’re seeing more than 500.”


Time to Demand IT Security by Design and Default

Organizations can send a strong message to IT suppliers by re-engineering procurement processes and legal contracts to align with secure-by-design and secure-by-default approaches. Updates to procurement policies and processes can set explicit expectations and requirements of their suppliers and flag any lapses. This isn’t about catching vendors out – many will benefit from the nudge. Changes in procurement assessment criteria can be flagged to IT suppliers in advance to give them a chance to course-correct. Suppliers can then be assessed against these yardsticks. If they fail to measure up, organizations have a clear justification to stop doing business with them. The next step is to create liability or penalty clauses in contracts that force IT vendors to share security costs for fixes or bolt-ons. This will drive them to devote more resources to security and to prevent security risks rather than scramble to cure them. Governments can support this by introducing laws that make it easier to claim under contracts for poor security.


DeFi as a solution in times of crisis

The collapse of Silicon Valley Bank in March 2023 shows that even large banks are still vulnerable to failure. But instead of having to trust that their money is still there, Web3 users can verify their holdings directly on chain. Additionally, blockchain technology allows for a more efficient and decentralized financial landscape. The peer-to-peer network pioneered by Bitcoin means that investors can hold their own assets and transact directly, with no middlemen and significantly lower fees. And unlike with traditional banks, the rise of DeFi sectors like DEXs, lending and liquid staking means individuals can now have full control over exactly how their deposited assets are used. Inflation is yet another ongoing problem that crypto and DeFi help solve. Unlike fiat currencies, cryptocurrencies like bitcoin have a fixed total supply. This means that your holdings in BTC cannot be easily diluted the way holdings in a currency such as USD can be. While a return to the gold standard of years past is sometimes proposed as a potential solution to inflation, adopting crypto as legal tender would have a similar effect while also delivering a range of other benefits, such as enhanced efficiency.


Cyber resilience through consolidation part 1: The easiest computer to hack

Most cyberattacks succeed because of simple mistakes made by users, or users not following established best practices. For example, having weak passwords or using the same password on multiple accounts is critically dangerous, but unfortunately a common practice. When a company is compromised in a data breach, account details and credentials can be sold on the dark web, and attackers then attempt the same username-password combination on other sites. This is why password managers, both third-party and browser-native, are growing in adoption. Two-factor authentication (2FA) is also growing in practice. This security method requires users to provide another form of identification besides just a password — usually via a verification code sent to a different device, phone number or e-mail address. Zero trust access methods are the next step, in which additional data about the user and their request is analyzed before access is granted.
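
On the 2FA point: besides codes delivered by SMS or e-mail, a common variant generates the code on the user's device with a time-based one-time password (TOTP) algorithm. A minimal verification sketch, assuming the pyotp library as one implementation choice:

```python
# TOTP sketch: the server and the user's authenticator share a secret enrolled
# once (often via QR code); both derive a 6-digit code from it every 30 seconds.
import pyotp

secret = pyotp.random_base32()      # stored server-side at enrollment
totp = pyotp.TOTP(secret)

print("Code the authenticator app would show:", totp.now())
user_supplied = input("Enter 2FA code: ")
if totp.verify(user_supplied):      # checks the current time window
    print("Second factor accepted")
else:
    print("Rejected: wrong or expired code")
```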


AI for Developers: How Can Programmers Use Artificial Intelligence?

If you write code snippets purely by hand, the process is prone to errors. If you audit existing code by hand, that too is prone to errors. Many things that happen during software development are prone to errors when they’re done manually. No, AI for developers isn’t completely bulletproof. However, a trustworthy AI tool can help you avoid things like faulty code writing and code errors, ultimately helping you to enhance code quality. ... AI is not 100% bulletproof, and you’ve probably already seen the headlines: “People Are Creating Records of Fake Historical Events Using AI”; “Lawyer Used ChatGPT In Court — And Cited Fake Cases. A Judge Is Considering Sanctions”; “AI facial recognition led to 8-month pregnant woman’s wrongful carjacking arrest in front of kids: lawsuit.” This is what happens when people take artificial intelligence too far and don’t use any guardrails. Your own coding abilities and skill set as a developer are still absolutely vital to this entire process. As much as software developers might love to lean completely on an AI code assistant for the journey, the technology just isn’t at that point yet.


The DX roadmap: David Rogers on driving digital transformation success

Companies mistakenly think that the best way to achieve success is by committing a lot of resources and focusing on implementation at all costs with the solution they have planned. Many organizations get burned by this approach because they don’t realize that markets are shifting fast, new technologies are coming in rapidly, and competitive dynamics are changing swiftly in the digital era. For example, CNN decided to get into digital news after looking at many benchmarks and reading several reports, thinking subscribers would pay monthly for a standalone news app. It was a disaster, and they shut down the initiative within a month. To overcome this challenge, companies must first unlearn the habit of assuming they know things that they don’t and trying to manage through planning. They should instead manage through experimentation. CIOs can help their enterprises in this area. They must bring what they have learned in their evolution toward agile software development over the years and help apply these rules of small teams, customer centricity, and continuous delivery to every part of the business.



Quote for the day:

"Strategy is not really a solo sport -- even if you're the CEO." -- Max McKeown

Daily Tech Digest - September 18, 2023

The ‘Great Retraining’: IT upskills for the future

As the technology ecosystem expands, Servier Pharmaceuticals’ Yunger believes cultivating hard-to-find skill sets from within is instrumental to future-proofing the IT organization. The company, a Google Cloud Platform shop, came face-to-face with that reality when it became difficult to find specialists, shifting its emphasis to growing its own talent. Yunger takes a talent lifecycle management approach that considers the firm’s three- to five-year strategy, aligns it to the requisite IT skills, and then matches the plan to individualized development and training programs. “We provide our vision of the future to our existing team and give them an opportunity to self-select into those paths to meet our future needs,” he explains. “The better our long-term vision, the more time we have to give our team the chance to learn and grow.” The University of California, Riverside, which is undertaking a similar practice to nurture IT talent from within, makes a concerted effort to start any large-scale reskilling initiative with those most willing to embrace change. 


The double-edged sword of AI in financial regulatory compliance

As fraudsters obtain more personal data and create more believable fake IDs, the accuracy of AI models improves, leading to more successful scams. The ease of creating believable identities enables fraudsters to scale identity-related scams with high success rates. Another key area where generative AI models can be employed by criminals is during various stages of the money laundering process, making detection and prevention more challenging. For instance, fake companies can be created to facilitate fund blending, while AI can simplify the generation of fake invoices and transaction records, making them more convincing. Furthermore, by bypassing KYC/CDD checks, it’s possible to create offshore accounts that hide the beneficial owners behind money laundering schemes. Generating false financial statements becomes effortless and AI can identify loopholes in legislation to facilitate cross-jurisdictional money movements.


Growing With AI Not Against It: How To Stay One Step Ahead

The key to effectively integrating AI into your business lies in proactive engagement. Rather than being passive recipients of technological changes, businesses should take an active role in understanding AI's potential applications. Reflecting on prominent companies such as Kodak and Nokia, which once dominated their respective industries but ultimately faltered due to their reluctance to adopt technological advancements, underscores the importance of embracing AI as a transformative force. Consider Netflix's evolution from mailing DVDs to streaming, and its use of AI algorithms to recommend personalized content to users. ... In the face of advancing AI technology, the role of leaders is not merely to keep up but to set the pace. By actively engaging with AI, embracing it as a partner, learning from mistakes, and strategically adapting our approach, we position ourselves to harness its potential to foster innovation and enable us to navigate the future with confidence.


Metaverse and Telemedicine: Creating a Seamless Virtual Healthcare Experience

Firstly, the convergence of new core technologies like blockchain, digital twins, and virtual hospitals into the Metaverse will empower clinicians to offer more integrated treatment packages and programs. Secondly, using AR and VR technologies will enhance patient experiences and outcomes. Another benefit of the Metaverse for telemedicine is that it will facilitate collaboration among healthcare professionals. The ability to share information between healthcare professionals immediately will enable quicker pinpointing of the causes of illnesses. Moreover, the Metaverse will offer new opportunities to students and trainees to examine the human body in a safe, virtual reality educational environment. Surgeons are already using VR, AR, and AI technology to perform minimally invasive surgeries, and the Metaverse opens up new frontiers in this area. Surgeons will be able to get a complete 360-degree view of a patient’s body, allowing them to better perform complex procedures using these immersive technologies.


Adaptive Security: A Dynamic Defense for a Digital World

Adaptive security systems employ continuous monitoring to gain real-time insights into an organization's network, applications, and endpoints. This continuous data collection allows for the rapid detection of abnormal behavior and potential threats. ... Understanding the context of an activity is crucial in adaptive security. Systems analyze not only the behavior of individual elements but also the relationships between them. This context-awareness helps in distinguishing between normal and malicious activities, reducing false positives. ... Adaptive security leverages machine learning and artificial intelligence (AI) algorithms to process vast amounts of data and identify patterns indicative of threats. These algorithms can adapt and evolve their detection capabilities based on new information and emerging attack vectors. ... Automation is a core element of adaptive security. When a potential threat is detected, adaptive security systems can automatically respond by isolating affected systems, blocking suspicious traffic, or alerting security teams for further investigation. 
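
Tying the machine-learning and automated-response pieces together, here is a rough sketch of an unsupervised anomaly detector that baselines normal behavior and flags outliers for automated action. The behavioral features are invented for illustration, and scikit-learn's IsolationForest is just one algorithm choice among many:

```python
# Baseline "normal" sessions, then flag outliers for automated response.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Illustrative features per session: MB sent, login hour, distinct hosts contacted.
normal_sessions = np.column_stack([
    rng.normal(50, 10, 1000),   # modest outbound traffic
    rng.normal(13, 2, 1000),    # daytime logins
    rng.normal(4, 1, 1000),     # few hosts contacted
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

suspicious = np.array([[900.0, 3.0, 60.0]])  # huge 3 a.m. upload to 60 hosts
if detector.predict(suspicious)[0] == -1:    # -1 means "anomaly"
    print("Flagged: isolate host, block traffic, or alert the security team")
```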


The Power Duo: How Platforms and Governance Can Shape Generative AI

As you catalog the tools in your organization, consider where most of your development takes place. Is it happening solely in notebooks requiring code knowledge? Are you versioning your work through a tool like GitHub, which is often confusing to a non-coding audience? How is documentation handled and maintained over time? Oftentimes, business stakeholders and consumers of the model are locked out of the development process because there is a lack of technical understanding and documentation. When work happens in a silo, hand-offs between teams can be inefficient and result in knowledge loss or even operational roadblocks. This leads to results that are not trusted or, even worse, outputs that are never adopted. Many organizations wait too long before leveraging business experts during the preparation and build stages of the AI lifecycle. ... This might be because only some of the glued-together infrastructure is understood by the business unit, the hand-off between teams is clunky and poorly documented, or the steps aren’t clearly laid out in an understandable manner.


How India is driving tech developments at G20

While there were no major technology-related announcements, a lot of indirect spillovers can be found in discussions on artificial intelligence (AI) and crypto regulations, taking a human-centric approach to technology, digitisation of trade documents and tech-enabled development of agriculture and education. As a run-up, there were recommendations and policy actions for the business sector, including the Startup20 initiative to support startup companies and the focus on digital public infrastructure (DPI). The summit also cast the spotlight on climate change commitments, clean energy, and sustainable development goals. Pradeep Gupta, founder of think tank Security and Policy Initiatives, noted that the emphasis on climate change initiatives at G20 would require IT to play a role in areas like equipment, data management and analytics. “Carbon credits cannot function without good AI and data technology in place,” he said. “DPI will also be a big lever for the industry.” V K Sridhar ... agreed that IT will be instrumental in driving all the climate change agreements that emerged at this G20 – both from a technology and administrative point of view.


Executive Q&A: Developing Data-Focused Professionals

Many universities have been caught unprepared for the exploding demand for AI skills. Most educational programs are traditional (four years) and do not necessarily give students the specialized just-in-time skills they need for these jobs. Deloitte had an interesting article about “AI whisperers” as the job of the future, referring to enterprises’ need for employees who deeply understand machine learning algorithms, data structures, and programming languages. Such jobs are already being advertised. An institute of higher education needs to be agile enough to create concentrations and certificates that quickly provide students and existing employees with just-in-time skills. ... There is inertia, and you can argue it is by design: universities are most comfortable with a traditional four-year education. They know how to do that, and the education boards that approve these programs are also comfortable with that format. However, a four-year education does not speak to all students or to their needs and where they are in life.


How to Become a Database Administrator

Capacity planning is a core responsibility of database administrators. Capacity planning is about estimating what resources will be needed – and available – in the future. These resources include computer hardware, software, storage, and connection infrastructure. Fortunately, planning for infrastructure-as-a-service (IaaS) is quite similar to planning for on-premises infrastructure. The basic difference in planning is the additional flexibility offered by the cloud. This flexibility allows DBAs to plan for the business’s immediate needs instead of planning for needs three to four years in advance. DBAs can also make use of the cloud’s ability to quickly scale up or down to meet the client’s demands. ... The DBA must be consciously aware of the business’s changing demands and the tools offered in the various clouds. Organizing the business in preparation for surge events – such as Black Friday or the start of school in September – and using the on-demand scalability available in cloud platforms is a primary responsibility of the modern DBA. Anticipating and responding to cyclical demands or major events makes the organization much more efficient.
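
In its simplest form, the estimating step is just trend extrapolation. A toy sketch, with invented numbers, of forecasting storage need from recent growth:

```python
# Fit a linear trend to monthly storage usage and project ahead. Real capacity
# planning also models surge events, retention policies, and cost.
import numpy as np

months = np.arange(12)                      # the last 12 months
used_tb = 40 + 2.5 * months + np.random.default_rng(1).normal(0, 1, 12)

slope, intercept = np.polyfit(months, used_tb, 1)  # growth in TB per month
horizon = 18                                       # plan 18 months out
projected = intercept + slope * (months[-1] + horizon)
print(f"Growth ~{slope:.1f} TB/month; projected need in {horizon} months: {projected:.0f} TB")
```

With on-demand cloud scaling, the horizon in a model like this can shrink from years to months, which is the flexibility the excerpt describes.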


SSE vs SASE: What You Need to Know

The Security Service Edge (SSE) framework was also coined by Gartner, but several years later, in 2021. The SSE framework retains most of the core elements of SASE. The key difference is that SSE is designed for IT environments where SD-WAN is not required. SSE fits well for networks that do not have multiple paths to reach destinations and have no need for application-based routing decisions. SSE is responsible for secure web, cloud services, and application access. One of the top business-case scenarios in which SSE works best is VPN replacement for remote employees. ... Typically, those considering SSE want a purely cloud-based security platform that provides a range of security functions at the edge of the network. As with SASE, leading networking and security vendors also have SSE options. However, the cloud-native nature of SSE means it is often marketed as a single platform that can be easily deployed, managed, and scaled. For this reason, SSE will likely gain traction at organizations looking to simplify and scale security for remote workers and transition to cloud-native environments.



Quote for the day:

"Everything you want is on the other side of fear." -- Jack Canfield

Daily Tech Digest - September 17, 2023

Experiment: IT companies eager to hire self-taught pros

“Self-education can be a valuable pathway to a successful career in cybersecurity and IT,” he says. “However, it may be challenging for self-learners to gain a comprehensive understanding of complex topics without structured guidance.” He adds: “Many cybersecurity roles require certifications and degrees for the validation of skills and knowledge. And while self-learners can earn certifications through self-study, some employers may still prefer candidates with formal degrees or recognized certifications.” Traditional education often provides opportunities for networking and internships, which can also be essential to career growth. "It's a very exciting time for education right now. People who are yearning to learn have a myriad of choices. Traditional paths are no longer the only way to secure essential experience and expertise to build careers,” said Sharahn McClung, career coach at TripleTen, an online part-time coding bootcamp. She believes that self-education puts learners in the driver’s seat, and people can find what they need to fit their unique circumstances and goals.


Eliminate roles, not people: fine-tuning the talent search during times of change

When someone expresses an interest in something, whether it’s emerging tech or a new process, are they going to step up? Do they know what they claim to know? And at the end, are they excited about sharing that? If you see that passion, pick them up and put them where they want to be and you’ll have such greater morale and engagement. It really is something any organization can do; they just have to make the space for it. It’s something where any HR leader can ask an employee, “Are you doing something you’re passionate about? Is there something you want to learn more about? Would you rather grow more in your current role, or explore another facet of the business?” Ask and you’ll be amazed at the data you get from one well-crafted question. From there, you can create that talent bank that says, “Oh, Julia actually said she was really interested in mobile computing, so we’re picking you up and putting you right here.” It’s easily done and accomplished, but I’m also a big fan of demonstrating what you know. So if you’re passionate about something, you know the universal knowledge behind it.


Top Intent-Based Networking Benefits and Challenges Explained

Intent-based networking (IBN) is a software-enabled automation technique that improves network operations and uptime by combining machine learning, artificial intelligence, analytics, and orchestration. IBN allows for flexible and agile network design that optimizes the quality of service for end users, using an algorithm that automates much of the process and scales well at a low cost. While traditional approaches to network management can scale up to a certain point, they quickly run into problems as a network grows larger. IBN addresses these issues by automating processes based on intent, giving network administrators tools that make it easier to manage large networks. ... IBN architecture is guided by a high-level business policy derived from user feedback. The software then checks to see if a user’s query is doable and sends proposed setups to the network administrator for authorization. This means intent is translated into actionable plans by validating against current network constraints.


Older workers are skilled and attentive listeners and prove to be exceptional assets in the workplace due to their receptiveness to training. Their ability to grasp instructions effectively and apply them with minimal repetition is a valuable trait. ... Older talents make excellent employees due to their efficiency and the confidence they have in sharing their suggestions and ideas. Their extensive experience in various roles equips them with a deep understanding of how tasks can be executed more effectively, ultimately leading to cost savings for companies. Additionally, their years of experience have cultivated their self-assuredness, making them unafraid to communicate their insights and recommendations to management. ... Hiring older workers can lead to significant savings in labour costs. Many of them come with existing insurance coverage from previous employers or have supplementary sources of income, which makes them more open to accepting slightly lower wages for their desired positions.


Are You a Disruptor or a Destructor? A Complete Guide to Innovation for Today's Leaders

Disruptive Innovation is a term coined by Clayton Christensen in 1997. It refers to a process where a smaller company, often with fewer resources, manages to challenge established industry leaders. The disruptors do this by targeting overlooked market segments or creating new markets altogether. Over time, these disruptors refine their products or services and start attracting a broader audience, eventually undermining the existing market leaders. ... On the flip side, Destructive Innovation refers to technologies or practices that harm or make existing models obsolete without adding significant value to the industry or consumers. ... the path you choose has profound implications for your business model, market positioning, and long-term sustainability. Whether you're a seasoned executive, a budding entrepreneur or a forward-thinking sales director, understanding these terms can help you steer your company in the direction that leads to long-term success rather than a short-lived buzz.


Platform Engineering: What’s Hype and What’s Not?

Rather than dealing a death blow to DevOps, a more accurate take is that platform engineering is the next evolution of DevOps and SRE (site reliability engineering). In particular, it benefits developers struggling with code production bottlenecks as they wait on internal approvals or fulfillment. It also helps devs deliver on their own timeline rather than that of their IT team. And it helps operator types (such as SREs or DevOps engineers) who are feeling the pain of repetitive request fulfillment and operational firefighting — busy work that keeps them from building their vision for the future. ... The agile development practices that are at the core of DevOps culture — such as collaboration, communication, and continuous improvement — have not extended to the operations domain. This has hobbled the ability of agile development teams to quickly deliver products. In order not to perpetuate this dynamic, DevOps team culture should evolve to support platform engineering, and platform teams should embrace DevOps team culture.


10 principles to ensure strong cybersecurity in agile development

Security is a team sport. Every developer needs to play their part in ensuring that code is free of security loopholes. Developers often lack the knowledge and understanding of security issues and they tend to prioritize software delivery over security matters. To empower developers, organizations must invest resources towards coaching, mentoring, and upskilling. This includes a combination of security training and awareness sessions, mentoring from senior developers, specialized agile security training events, and access to freely available resources such as OWASP, CWE, BSIMM (Building Security In Maturity Model), SAFECode, and CERT. ... It’s less costly and more efficient to bake security in from the start, rather than trying to add it after the cake comes out of the oven. Leadership must establish processes that help manage information risk throughout the entire development lifecycle. This includes agreeing on high-level application architecture from a security perspective, identifying a list of "security-critical" applications and features, performing a business impact assessment, conducting information risk and vulnerability assessments at early stages, and a process for reporting newly identified risks. 


“Embrace cybersecurity automation and orchestration, but in moderation,” says my puppy

There are three general principles to employ when using automation and orchestration to minimize these risks and maximize the gains in efficiency, cost reduction, and security effectiveness. (1) Scale: automate at small scales, not large. Large-scale automation can be done, but is best done through incremental increases and gains over time rather than in monumental leaps. (2) Look and test: look at the blind spots that automation can cause and test actively with red teaming and purple teaming. If automation is driving analysts to investigate a certain way, occasionally send them different types of prompts or alerts, or look at the data that is ignored. (3) Check under the hood: make sure that those who are getting support and growing their skills in the shadow of automation and orchestration understand how that happens. Encourage skepticism in the system itself in operations. Overall, automation and orchestration are both critical components of a strong cybersecurity strategy. Arguably, they may be necessary to grow in maturity and handle advanced threats at scale.


The future of private AI: open source vs closed source

When deciding which approach to take, investment is always a consideration. Developing private AI models in-house typically involves a greater investment than platform or public cloud options, as it requires businesses to fund and build a team of experts, including data scientists, data engineers and software engineers. On the other hand, taking a platform approach to private AI does not require a team of experts, which significantly reduces the complexity and cost associated with private AI deployment. Speed of deployment is another consideration. ... Another important factor to consider when choosing an AI strategy is whether to train AI using an open source AI or a closed AI model. While open source AI is pre-trained on huge sets of publicly available data, the security and compliance risks associated with this approach are significant. To mitigate risks, organisations can adopt a hybrid open source AI model, where their data is kept private but the code, training algorithms and architecture of the AI model are publicly available. Closed AI models, on the other hand, are kept private by the organisations that develop them, including the training data, AI codebase and underlying architecture. 


Domain-Driven Cloud: Aligning your Cloud Architecture to your Business Model

DDC extends the principles of DDD beyond traditional software systems to create a unifying architecture spanning business domains, software systems and cloud infrastructure. Our customers perpetually strive to align "people, process and technology" together so they can work in harmony to deliver business outcomes. However, in practice, this often falls down as the Business (Biz), IT Development (Dev) and IT Operations (Ops) all go to their separate corners to design solutions for complex problems that actually span all three. What emerges is business process redesigns, enterprise architectures and cloud platform architecture all designed and implemented by different groups using different approaches and localized languages. What’s missing is a unified architecture approach using a shared language that integrates BizDevOps. This is where DDC steps in, with a specific focus on aligning the cloud architecture and software systems that run on them to the bounded contexts of your business model, identified using DDD. 



Quote for the day:

"If you spend your life trying to be good at everything, you will never be great at anything." -- Tom Rath

Daily Tech Digest - September 08, 2023

Peril vs. Promise: Companies, Developers Worry Over Generative AI Risk

One widespread concern over AI is that the systems will replace developers: 36% of developers worry that they will be replaced by an AI system. Yet the GitLab survey also gave more weight to arguments that disruptive technologies result in more work for people: Nearly two-thirds of companies hired employees to help manage AI implementations. Part of the concern seems to be generational. More experienced developers tend not to accept the code suggestions made by AI systems, while more junior developers are more likely to accept them, Lemos says. Yet both are looking to AI to assist them with the most boring work, such as documentation and creating unit tests. "I'm seeing a lot more developers raising the idea of having their documentation written by AI, or having test coverage written by AI, because they care less about the quality of that code, but just that the test works," he says. "There's both a security and a development benefit in having better test coverage, and it's something that they don't have to spend time on."


Feds Urge Immediate Patching of Zoho and Fortinet Products

CISA found that beginning in January, multiple APT groups separately exploited two different critical vulnerabilities to gain unauthorized access and exfiltrate data from the organization. Both of the unrelated flaws - CVE-2022-47966 in Zoho ManageEngine and CVE-2022-42475 in Fortinet FortiOS SSL VPN - have been classified as being of critical severity, meaning they can be exploited to remotely execute code, allowing attackers to take control of the system and pivot to other parts of the network. Each of the vendors issued updates patching their flaws in late 2022. Researchers refer to these as N-day vulnerabilities, meaning known flaws, as opposed to zero-day vulnerabilities, for which no patch is yet available. The alert, issued by CISA, the FBI and U.S. Cyber Command's Cyber National Mission Force, includes details of how attackers used each of the flaws to gain wider access to victims' networks. The advisory doesn't state which nation or nations' APT groups have been tied to known exploits of these flaws.


Scrum Master Skills We Rarely Talk About: Change Management

The initial stride towards constructing a "compelling case for change" is defining the vision of the type of organization we aspire to become. It's crucial to emphasize that the organization's mode of operation should never serve as the ultimate goal in itself. Rather, it serves as a supplementary element that "enables" the organization in the pursuit of its objectives. This, in turn, gives rise to the necessity for change, marking the starting point of the entire process. A clearly expressed need for change (or the response to the question "Why exactly?") opens the gateway to the subsequent consideration: how should our organization function to realize its goals? This is what we refer to as the Ideal State. Once we've defined the Ideal State of the organization, we can precisely articulate the exact optimizations required, alongside the pivotal indicators we will employ to monitor our progress throughout the change process. The Optimization Goal acts as our compass, guiding the direction of change or indicating precisely what adjustments need to be made.


Cloud first is dead—cloud smart is what’s happening now

Cloud smart involves making the best use of cloud concepts whether they are on premises or off and fundamentally making the most rational choice of locality as part of the thinking. A cloud smart architectural approach is essential because it enables enterprises to optimize their on-premises IT infrastructure and leverage the benefits of the cloud as well. With cloud smart architecture, enterprises can design and deploy highly available, scalable, and resilient solutions that have cloud operating characteristics to adapt to their changing business needs. After the initial rush to public cloud, this belated dose of reality is a positive. It reflects the recognition that there needs to be a smarter balance right between what's on premises vs. what's in the public cloud. Knowing how to strike the right balance—with the understanding that not every application is meant for the cloud—can ensure that you optimize performance, reliability, and cost, driving better long-term outcomes for your organization.


Are We Ready for a World Without Passwords?

Passwordless authentication simply means eliminating passwords. The FIDO Alliance introduced FIDO2, a universally accepted authentication protocol offering frictionless, phishing-resistant, passwordless authentication. FIDO2 allows users to authenticate to a web, SaaS, or mobile application using native device biometrics or a PIN from their laptop, desktop or mobile phone. The user can access any application with a simple swipe on the fingerprint reader, a face scan with the camera or by entering a static PIN on their device. FIDO2 passwordless authentication is MFA by default and phishing resistant, since the attacker needs physical access to the device and also access to the user’s PIN or biometrics. FIDO2 uses cryptographic keys (public and private) where the private key and the user’s biometric data never leave the user’s device, thereby protecting the user’s privacy. It also prevents user activity tracking across services, since a unique set of credentials is generated for each service.
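
The full FIDO2/WebAuthn protocol adds origin binding, signature counters, and attestation, but its cryptographic core (a device-held private key signing a fresh server challenge, checked against a stored public key) can be sketched with Python's cryptography package; everything here is illustrative:

```python
# Not the real FIDO2/WebAuthn wire protocol -- just its cryptographic core:
# the private key never leaves the device, the server stores only a public key,
# and login verifies a signature over a fresh random challenge.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the key pair is generated on the user's device.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_public_key = device_private_key.public_key()  # only this is sent to the server

# Login: the server sends a random challenge; the device signs it only after
# the local biometric or PIN check succeeds.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Verification: raises InvalidSignature if the wrong key (e.g., a phisher) signed.
server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("Challenge verified: user authenticated without any shared password")
```

Because each service enrolls its own key pair, a credential captured from one site is useless at another, which is the per-service credential isolation the excerpt describes.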


Is Security a Dev, DevOps or Security Team Responsibility?

Security is not the job of any one group or type of role. On the contrary, security is everyone’s job. Forward-thinking organizations must dispense with the mindset that a certain team “owns” security, and instead embrace security as a truly collective team responsibility that extends across the IT organization and beyond. After all, there is a long list of stakeholders in cloud security, including: Security teams, who are responsible for understanding threats and providing guidance on how to avoid them; Developers, who must ensure that applications are designed with security in mind and that they do not contain insecure code or depend on vulnerable third-party software to run; ITOps engineers, whose main job is to manage software once it is in production and who therefore play a leading role both in configuring application-hosting environments to be secure and in monitoring applications to detect potential risks; DevOps engineers, whose responsibilities span both development and ITOps work, placing them in a position to secure code during both the development and production stages.


Windows desktop apps are the future (with or without Windows)

Microsoft is betting big on this with Windows 365. Currently available only for businesses, Windows 365 is a Windows desktop-as-a-service hosted by Microsoft. Businesses can set up their employees with remotely accessed Windows desktops. Those employees can access them through nearly any device: a Chromebook, Mac, iPad, Android tablet, smart TV, smartphone, or whatever — even from a PC. Microsoft is building better support for accessing Windows 365 desktops into Windows 11, letting you flip between your cloud PC and local PC from the “Task View” button on your taskbar or even boot straight to a Windows 365 cloud PC desktop on a physical Windows 11 PC. While this is only for businesses at the moment, internal documents show Microsoft is working on Windows 365 cloud PC plans for home users. It’s not just about Microsoft, either. Even Google now has a new solution for running Windows apps natively in ChromeOS called “ChromeOS Virtual App Delivery.” 


How Failures Lead to Innovation

When failure occurs, not giving up or abandoning your idea is essential. Instead, look at the problem differently and find a new solution. This process involves a series of steps that, when combined, can lead to groundbreaking innovation. First, there’s a need to reassess your vision and redefine your objectives. What was the original goal? Is it still relevant, or does the failure open up a new direction that could be more beneficial? Second, identify the root cause of the failure and understand its implications. This is where a deep dive into the details is crucial. In doing so, you might uncover overlooked opportunities or hidden insights. Third, brainstorm new solutions. Use the knowledge from the failure to think of innovative approaches or strategies that could work better. Fourth, prototype and test these new ideas. Not every new idea will be successful, but through prototyping and testing, you’ll get closer to finding a solution that works. Fifth, iterate on the process. Innovation is rarely a one-off event. It’s a continuous learning process, designing, testing, and refining.


Velocity Over Speed, A Winner Every Time

Precision Bias is the utterly false belief that we can predict any time length ever. No one saw COVID coming, so every damn prediction made at the time failed to come true. And while most delays are not caused by such global meltdowns, they still happen. But the addiction to speed itself is one of the largest factors in slowing down our delivery times. To understand velocity, we have to understand value, both intangible value and direct value. I call this ‘soaking in numbers’. When I am with a new client (read my article on clients vs. customers), I like to dig in and learn every value metric they find important. I want mean time to recover. I want the number of new customers per day. I want net promoter scores, profitability, lead times, partner surveys, employee turnover, all of it. These are the language of value that a set of stakeholders uses to describe value. Notice how few of those measures involve speed numbers? I guesstimate that only 10-15% of any set of measures will be speed related. In fact, speed will cause many of those metrics to fail. Too many new hires, too many orders, too many acquisitions.


How to Succeed with Unifying DataOps and MLOps Pipelines

How to actually integrate data and ML pipelines depends on an organization’s existing overall structure. “Organizations are essentially either centralized or decentralized,” Kobielus said. For those that are already centralized to one degree or another, unifying data and ML pipelines is really just a question of converging the existing back ends -- often in the form of a data lakehouse. In the case of a more decentralized organization, Kobielus explained, unification of the different back ends requires an abstraction layer that enables users to query data in a uniform, simplified way across all the disparate environments where it may reside. For many organizations, this layer is taking the form of a data mesh or a data fabric that consolidates access to data and analytics across a range of environments. “The bottom line for success,” Kobielus said, “is to what extent you can build more monetizable data and analytics and the degree to which you can automate all of it. That automation needs to happen on the back end.” 



Quote for the day:

"If you set your goals ridiculously high and it's a failure, you will fail above everyone else's success." -- James Cameron

Daily Tech Digest - September 06, 2023

Open Source Needs Maintainers. But How Can They Get Paid?

The data show that not only are open source maintainers usually unaware of current security tools and standards, like software bills of materials (SBOMs) and supply-chain levels for software artifacts (SLSA), but they are largely unpaid and, to a frightening degree, on their own. A study released in May by Tidelift found that 60% of open source maintainers would describe themselves as “unpaid hobbyists.” And 44% of all maintainers said they are the only person maintaining a project. “Even more concerning than the sole maintainer projects are the zero maintainer projects, of which there are a considerable amount as well that are widely used,” Donald Fischer, CEO and co-founder of Tidelift, told The New Stack. “So many organizations are just unaware because they don’t even have telemetry, they have no data or visibility into that.” ... An even bigger threat to continuity in open source project maintenance is the “boss factor,” according to Fischer. The boss factor, he said, emerges when “somebody gets a new job, and so they don’t have as much time to devote to their open source projects anymore, and they kind of let them fall by the wayside.”


Your data is critical – do you have the right strategy in place for resilience?

Recovering multi-master databases requires specialist skills and understanding to prevent problems around concurrency. In effect, this means having one agreed list of transactions rather than multiple conflicting lists that might contradict each other. Similarly, you have to ensure that any recovery brings back the right data, rather than any corrupted records. Planning ahead on this process makes it much easier, but it also requires skills and experience to ensure that DR processes will work effectively. Alongside this, any DR plan will have to be tested to prove that it will work, and work consistently when it is most needed. Any plan around data has to take three areas into account – availability, restoration and cost. Availability planning covers how much work the organisation is willing to do to keep services up and running, while restoration covers how much time and data has to be recovered in the event of a disaster. Lastly, cost covers the amount of budget available to cover these two areas, and how much has to be spent in order to meet those requirements.


7 tough IT security discussions every IT leader must have

Cybercriminals never sleep; they’re always conniving and corrupting. “When it comes to IT security strategy, a very direct conversation must be held about the new nature of cyber threats,” suggests Griffin Ashkin, a senior manager at business management advisory firm MorganFranklin Consulting. Recent experience has demonstrated that cybercriminals are now moving beyond ransomware and into cyberextortion, Ashkin warns. “They’re threatening the release of personally identifiable information (PII) of organization employees to the outside world, putting employees at significant risk for identity theft.” ... The meetings and conversations should lead to the development or update of an incident response plan, he suggests. The discussions should also review mission-critical assets and priorities, assess an attack’s likely impact, and identify the most probable attack threats. By changing the enterprise’s risk management approach from matrix-based measurement (high, medium, or low) to quantitative risk reduction, you’re basing actual potential impact on as many variables as needed, Folk says.


Emerging threat: AI-powered social engineering

As malicious actors gain the upper hand, we could potentially find ourselves stepping into a new era of espionage, where the most resourceful and innovative threat actors thrive. The introduction of AI brings about a new level of creativity in various fields, including criminal activities. The crucial question remains: How far will malicious actors push the boundaries? We must not overlook the fact that cybercrime is a highly profitable industry with billions at stake. Certain criminal organizations operate similarly to legal corporations, having their own infrastructure of employees and resources. It is only a matter of time before they delve into developing their own deepfake generators (if they haven’t already done so). With their substantial financial resources, it’s not a matter of whether it is feasible but rather whether it will be deemed worthwhile. And in this case, it likely will be. What preventative measures are currently on offer? Various scanning tools have emerged, asserting their ability to detect deepfakes.


Scrum is Not Agile Enough

Scrum thrives in scenarios where the project’s requirements might evolve or where customer feedback is crucial, because of its short sprints. It works well when a team can commit to the roles, ceremonies, and iterative nature of the framework. When there is a need for clear accountability and communication among team members, stakeholders, and customers, Scrum works better than Kanban, which relies on less rigid task allocation. The problem is the scale at which Scrum is used. While there is some consensus on the strengths of the methodology, it is not applicable to all projects. One common situation engineers face: in teams that build multiple applications, individuals can’t start a new story until all the ongoing stories are complete. Team members who have finished remain idle until everyone else has completed their stories, which is entirely inefficient. Long meetings are another pain point for users; there’s a substantial investment in planning and meetings, with significant time allocated to discussing stories that sometimes require only 30 minutes to complete.


Technology Leaders Can Turbocharge Their Company’s Growth In Five Ways

Some growth will be powered by new technologies; CIOs and other technology leaders can demonstrate how emerging technologies create specific growth opportunities. Instead of pitching random acts of metaverse or blockchain, which require radical changes in life or trade to matter, technology leaders can iterate on new technologies and infuse ideas from these into their own products. ... Outcomes of all kinds can always be improved — AI is just the newest tool in the improvement toolkit, joining analytics, automation and software. Personalization at scale is a good example of amplifying growth. Technology leaders should collaborate with marketing colleagues and mine databases to find better purchase signals that improve offers and outreach. They can also automate processes to streamline onboarding and improve revenue recognition. ... No technology leader and no company will do this alone. They will work with technology and service providers to build and operate the new capabilities, including those powered by generative AI.


Proposed SEC Cybersecurity Rule Will Put Unnecessary Strain on CISOs

In its current form, the proposed rule leaves a lot of room for interpretation, and it's impractical in some areas. For one, the tight disclosure window will put massive amounts of pressure on chief information security officers (CISOs) to disclose material incidents before they have all the details. Incidents can take weeks and sometimes months to understand and fully remediate. It is impossible to know the impact of a new vulnerability until ample resources are dedicated to remediation. CISOs may also end up having to disclose vulnerabilities that, with more time, end up being less of an issue and therefore not material. ... Another issue is the proposal's requirement to disclose circumstances in which a security incident was not material on its own but has become so "in aggregate." How does this work in practice? Is an unpatched vulnerability from six months ago now in scope for disclosure (given that the company didn't patch it) if it's used to extend the scope of a subsequent incident? We already conflate threats, vulnerabilities, and business impact.


Contending with Artificially Intelligent Ransomware

Deploying a malicious payload onto a targeted computer is a very complex task. The payload is not a static executable that can be easily detected based on signatures. AI could generate a customized payload for each victim, progressively advancing within compromised systems with patience and precision. The key to successful malware lies in emulating normal, expected behavior to avoid triggering any defensive measures, even from vigilant users themselves. We’re witnessing genuinely authentic-looking software emerging in various distributions, ostensibly offering specific functionalities while harboring ulterior motives: earning users’ trust, then eventually acting with malicious intent. In this context, AI is entirely capable of streamlining the process, crafting software with dormant malicious capabilities primed for activation at a later point, possibly during the next update.


3 types of incremental forever backup

The first type of incremental forever backup is a file-level incremental forever backup product. This type of approach has actually been around for quite some time, with early versions of it available in the '90s. The reason why this is called a file-level incremental is that the decision to back up an item happens at the file level. If anything within a file changes, it will change its modification date, and the entire file will be backed up. ... Another incremental forever backup approach is block-level incremental forever. This method is similar to the previous method in that it will perform one full backup and a series of incremental backups – and will never again perform a full backup. In a block-level incremental backup approach, the decision to back up something happens at the bit or block level. ... The final type of incremental forever backup is called source deduplication backup software, which performs the deduplication process at the very beginning of the backup. It will make the decision at the backup client as to whether or not to transfer a new chunk of data to the backup system.
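
A toy sketch of that client-side decision, with fixed-size chunks and an in-memory index standing in for the backup system's real chunk store (production tools typically use content-defined, variable-size chunking and a networked index):

```python
# Source deduplication sketch: split files into chunks, hash each chunk,
# and transfer only chunks the backup system has never seen before.
import hashlib

CHUNK_SIZE = 4 * 1024 * 1024    # 4 MiB fixed chunks, for simplicity
seen_chunks: set[str] = set()   # stands in for the backup system's chunk index

def transfer_to_backup(digest: str, chunk: bytes) -> None:
    print(f"sending {len(chunk)} bytes, id {digest[:12]}...")  # placeholder transport

def backup_file(path: str) -> None:
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            digest = hashlib.sha256(chunk).hexdigest()
            if digest in seen_chunks:
                continue                       # duplicate: record a reference only
            seen_chunks.add(digest)
            transfer_to_backup(digest, chunk)  # new data: ship it over the wire
```

Deciding at the chunk level, before any data leaves the client, is what distinguishes source deduplication from the file-level and block-level approaches above.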


The Future of Work is Remote: How to Prepare for the Security Challenges

When embracing hybrid or remote work, the lack of in-person contact among staff may have a less-than-ideal effect on corporate culture. For those “forced back” to the office, disgruntlement will breed resentment. In both cases, disengagement between staff and their employer will have an adverse effect on their attitudes toward the company and, consequently, heighten the risk of insider threats, either by accident, judgment errors or malicious intent. ... New security technology can streamline and bolster defenses but often falls short. Without human interaction and experience, these systems lack the context to make accurate decisions. As a result, they may generate false positives or miss real threats. Security technology is often designed to work with little or no human input, which can lead to problems when the system encounters something it doesn’t understand; for example, a new type of malware or a sophisticated attack. Security systems need to be regularly updated otherwise, they’re at risk of becoming obsolete. 



Quote for the day:

"Never say anything about yourself you do not want to come true." -- Brian Tracy

Daily Tech Digest - September 05, 2023

GenAI in productivity apps: What could possibly go wrong?

The first and most obvious risk is the accuracy issue. Generative AI is designed to generate content — text, images, video, audio, computer code, and so on — based on patterns in the data it's been trained on. Its ability to provide answers to legal, medical, and technical questions is a bonus. And in fact, the AIs are often accurate. The latest releases of some popular genAI chatbots have passed bar exams and medical licensing tests. But this can give some users a false sense of security, as when a pair of lawyers got into trouble by relying on ChatGPT to find relevant case law — only to discover that it had invented the cases it cited. That's because generative AIs are not search engines, nor are they calculators. They don't always give the right answer, and they don't give the same answer every time. For code generation in particular, large language models can have extremely high error rates, said Andy Thurai, an analyst at Constellation Research. "LLMs can have rates as high as 50% of code that is useless, wrong, vulnerable, insecure, and can be exploited by hackers," he said.


CFOs and IT Spending: Best Practices for Cost-Cutting

Auvik Networks’ Feller stressed it is important for CFOs not to come in and start slashing everything. “There was a reason why IT applications and services were purchased in the first place and, in today’s corporate environment, many of these systems are integrated with each other and into employees’ work processes,” he says. “CIOs should have a good idea of what’s critical and sensitive.” He says the way he tends to approach this is by working with the CIO to identify the applications that are main “sources of truth” for key corporate data. These tend to be the financial and accounting systems or enterprise resource planning (ERP), customer relationship management (CRM), human resources information system (HRIS), and often a business intelligence (BI) system. “For each of those key systems, we evaluate whether they are still the right choice for where the company has evolved and will they scale as the company grows,” he says. “Replacing one or more of those systems can be a big, complicated project but is often essential to a company’s success.”


Hackers Adding More Capabilities to Open Source Malware

Researchers observed that the malware samples are currently being used by multiple threat actors, and several variants of the threat are already in the wild, with the actors improving its efficiency and effectiveness over time. The malware is capable of stealing sensitive information from infected systems, including host information, screenshots, cached browser credentials and files stored on the system that match a predefined list of file extensions. It also attempts to determine the presence of credential databases for browser applications including Chrome, Yandex, Edge and Opera. Once executed, the malware creates a working directory, and a file grabber executes and attempts to locate any files stored within the victim's Desktop folder that match a list of extensions including .txt, .pdf, .doc, .docx, .xml, .img, .jpg and .png. The malware then creates a compressed archive called log.zip containing all of the logs, and the data is transmitted to the attacker via the Simple Mail Transfer Protocol (SMTP) "using credentials defined in the portion of code responsible for crafting and sending the message."
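For defenders writing detections, the grabber logic described above is worth internalizing. The following benign Python sketch is an assumption about how such logic typically looks, not the actual malware code; it reproduces the reported pattern of matching Desktop files against an extension list and staging them into a log.zip archive.

```python
import os
import zipfile

# Extension list reported by the researchers
TARGET_EXTS = {".txt", ".pdf", ".doc", ".docx", ".xml", ".img", ".jpg", ".png"}

def grab_matching_files(desktop_dir):
    """Benign re-creation of the described grabber: list files in the
    Desktop folder whose extensions match a predefined set."""
    return [
        os.path.join(desktop_dir, name)
        for name in os.listdir(desktop_dir)
        if os.path.isfile(os.path.join(desktop_dir, name))
        and os.path.splitext(name)[1].lower() in TARGET_EXTS
    ]

def stage_archive(paths, out_path="log.zip"):
    """Stage matched files into a compressed log.zip, the artifact the
    samples create before exfiltrating the data over SMTP."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for p in paths:
            zf.write(p, arcname=os.path.basename(p))
```

The behavioral pair this suggests alerting on: sudden creation of an archive like log.zip in a user profile, followed by outbound SMTP traffic from a non-mail process.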


Connected cars and cybercrime: A primer

Connected car cybercrime is still in its infancy, but criminal organizations in some nations are beginning to recognize the opportunity to exploit vehicle connectivity. Surveying today's underground message forums quickly reveals that the pieces could fall into place for more sophisticated automotive cyberattacks in the years ahead. Discussions on underground crime forums about data that could be leaked, and about the software tools needed or already available to enable attacks, are intensifying. A post from a publicly searchable auto-modders forum about a vehicle's multi-displacement system (MDS) for adjusting engine performance is symbolic of the current activity and possibilities. Another, in which a user on a criminal underground forum offers a data dump from a car manufacturer, points to the threats that are likely coming to the industry. Though the offerings still seem limited to ordinary stolen data, compromises and network access are already for sale in the underground.


Identify Generative AI’s Inherent Risks to Protect Your Business

Generative AI models have essentially three attack surfaces: the architecture of the model itself, the data it was trained on, and the data fed into it by end users. For example, adversarial attacks and data poisoning depend on the model's training data having a security flaw and thus being open to manipulation and infiltration. This allows threat actors to inject incorrect or misleading information into the training data, which the model then uses to generate responses, leading to inaccurate information presented as accurate by a trusted model and, subsequently, flawed decision-making. Model extraction attacks depend on the hacker's skill in compromising the model itself: the threat actor queries the model to gain information about its structure and thereby determine the actions it executes and what its targets are. One goal of this sort of attack could be reverse-engineering the model's training data (for instance, private customer data) or recreating the model itself for nefarious purposes. Notably, any of these attacks can take place before or after the model is installed at a user site.
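The model-extraction pattern lends itself to a compact illustration. Below is a minimal Python sketch under stated assumptions: query_model is a hypothetical stand-in for a black-box prediction API, and scikit-learn is available. The attacker never sees the weights, only input/output pairs, yet ends up with a surrogate that mimics the target.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def query_model(x):
    """Hypothetical stand-in for the victim's remote prediction API."""
    return int(x.sum() > 0)  # pretend this is a proprietary model

# The attacker samples inputs and harvests the model's answers...
X = np.random.randn(5000, 10)
y = np.array([query_model(row) for row in X])

# ...then fits a local surrogate that approximates the target's behavior.
surrogate = DecisionTreeClassifier(max_depth=5).fit(X, y)
print(surrogate.score(X, y))  # high agreement means a successful extraction
```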


How attackers exploit QR codes and how to mitigate the risk

A common attack involves placing a malicious QR code in public, sometimes covering up a legitimate QR code; when unsuspecting users scan the code, they are sent to a malicious web page that could host an exploit kit, Sherman says. This can lead to further device compromise or possibly a spoofed login page to steal user credentials. "This form of phishing is the most common form of QR exploitation," Sherman says. QR code exploitation that leads to credential theft, device compromise or data theft, and malicious surveillance are the top concerns for both enterprises and consumers, he says. If QR codes lead to payment sites, users might divulge their passwords and other personal information that could fall into the wrong hands. "Many websites do drive-by download, so mere presence on the site can start malicious software download," says Rahul Telang, professor of information systems at Carnegie Mellon University's Heinz College.
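One mitigation implied above is to never navigate blindly to whatever a QR code decodes to. Here is a minimal Python sketch of the URL-vetting step; the allowlisted domain is hypothetical, and in practice the list would come from organizational policy.

```python
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"pay.example.com"}  # hypothetical corporate allowlist

def is_safe_qr_url(url):
    """Vet a URL decoded from a QR code before opening it: require HTTPS
    and an exact allowlisted hostname, which defeats lookalike domains."""
    parts = urlparse(url)
    return parts.scheme in ALLOWED_SCHEMES and parts.hostname in ALLOWED_HOSTS

print(is_safe_qr_url("https://pay.example.com/invoice/42"))    # True
print(is_safe_qr_url("https://pay.example.com.evil.io/login")) # False
```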


The ‘IT Business Office’: Doing IT’s admin work right

Each IT manager has a budget to manage to. Sadly, in most companies budgeting looks more like a game of pin-the-tail-on-the-donkey than a well-defined and consistent algorithm. In principle, much of IT staffing can be derived from a parameter-driven model, though this can be hard to reconcile with Accounting's requirements for budget development. With an IT Business Office managing the relationship with Accounting, IT can explain its methods once, instead of manager by manager by manager. ... Business-wide, new-employee onboarding should be coordinated by HR, but more often each piece of the onboarding puzzle is left to the department responsible for that piece. An IT Business Office can't and shouldn't try to fix this often-broken process throughout the enterprise. But onboarding new IT employees is, if anything, even more complicated than onboarding anyone else's. An IT Business Office can, if nothing else, smooth things out for newly hired IT professionals so they can start working the day they show up.
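To make the parameter-driven staffing model mentioned above concrete, here is a toy Python sketch; the ratios are invented for illustration and are not from the article, but they show how headcount can fall out of a few agreed parameters instead of pin-the-tail budgeting.

```python
import math

def it_staffing(servers, end_users, apps_supported,
                servers_per_admin=75, users_per_tech=250, apps_per_analyst=6):
    """Toy parameter-driven staffing model: headcount is derived from
    workload drivers and agreed coverage ratios (all values hypothetical)."""
    return {
        "sysadmins": math.ceil(servers / servers_per_admin),
        "support_techs": math.ceil(end_users / users_per_tech),
        "app_analysts": math.ceil(apps_supported / apps_per_analyst),
    }

print(it_staffing(servers=300, end_users=2000, apps_supported=25))
# {'sysadmins': 4, 'support_techs': 8, 'app_analysts': 5}
```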


MSSQL Databases Under Fire From FreeWorld Ransomware

According to an investigation by Securonix, the typical attack sequence observed in this campaign begins with brute-forcing access to exposed MSSQL databases. After initial infiltration, the attackers expand their foothold within the target system and use MSSQL as a beachhead to launch several different payloads, including remote-access Trojans (RATs) and a new Mimic ransomware variant called "FreeWorld," named for the inclusion of the word "FreeWorld" in the binary file names, a ransom instruction file named FreeWorld-Contact.txt, and the ransomware extension, ".FreeWorldEncryption." The attackers also establish a remote SMB share to mount a directory housing their tools, which include a Cobalt Strike command-and-control agent (srv.exe) and AnyDesk, and they deploy a network port scanner and Mimikatz for credential dumping and lateral movement. Finally, the threat actors carry out configuration changes, from user creation and modification to registry changes, to impair defenses.
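Because the reported initial access is plain brute force against exposed MSSQL, even simple log analytics can surface it early. Below is a minimal detection sketch in Python; the log line format is simplified and hypothetical, so the regex would need adapting to real SQL Server error-log output.

```python
import re
from collections import Counter

# Simplified/hypothetical failed-login line, loosely modeled on SQL Server
# error-log entries such as:
#   Login failed for user 'sa'. Reason: ... [CLIENT: 203.0.113.7]
FAILED_LOGIN = re.compile(r"Login failed for user '(?P<user>[^']+)'.*CLIENT: (?P<ip>[\d.]+)")

def flag_bruteforce(log_lines, threshold=20):
    """Count failed logins per (source IP, account) pair and flag bursts
    that look like the brute-forcing stage of this campaign."""
    attempts = Counter()
    for line in log_lines:
        m = FAILED_LOGIN.search(line)
        if m:
            attempts[(m.group("ip"), m.group("user"))] += 1
    return [pair for pair, count in attempts.items() if count >= threshold]

sample = ["Login failed for user 'sa'. Reason: password mismatch. [CLIENT: 203.0.113.7]"] * 25
print(flag_bruteforce(sample))  # [('203.0.113.7', 'sa')]
```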


Managing Data as a Product: What, Why, How

Applying product management principles to data means attempting to address the needs of as many different potential consumers as possible, which requires developing an understanding of the consumer base. The consumers are typically in-house staff accessing the organization's data. (The data is not being "sold"; it is treated as a product available for distribution, with the in-house consumers' needs identified the way a product manager would identify a customer's.) From a big-picture perspective, the business's goal is to maximize the use of its in-house data, and managing data as a product requires applying the appropriate product management principles. ... The data-as-a-product philosophy is an important feature of the data mesh model. Data mesh is a decentralized form of data architecture, controlled by different departments or offices – marketing, sales, customer service – rather than a single location. Historically, a data engineering team would perform all research and analytics, a process that severely limited research compared to the self-service approach promoted by the data-as-a-product philosophy and the data mesh model.


Enterprise Architecture Must Look Beyond Venturing the Gap Between Business and IT

The architects should not be the ones managing and maintaining the repository by themselves. They should enable the rest of the organization, making sure everyone can query the repository. Architecture needs to become part of every strategic and tactical role in your organization. I think EA is basically following the path that so many other industries and disciplines have followed already: the path of democratization. Today, we all have a supercomputer in our pocket, meaning that we have more functionality than ever before. We don't even have to go to the machine room, or even to our desk anymore; we can just take it out of our pocket and let it help us make the right decisions about where we want to go, how we're going to respond to an email, whatever decision we're making. This self-service way of working has enabled organizations to be much more efficient, much more transparent, much more effective. And I think this is what we want to achieve with EA as well.



Quote for the day:

“Just because you’re a beginner doesn’t mean you can’t have strength.” -- Claudio Toyama