Daily Tech Digest - September 21, 2023

6 deadly sins of enterprise architecture

The simplest way to build out enterprise software is to leverage the power of various tools, portals, and platforms constructed by outsiders. Often 90%+ of the work can be done by signing a purchase order and writing a bit of glue code. But trusting the key parts of an enterprise to an outside company has plenty of risks. Maybe some private equity firm buys the outside firm, fires all the good workers, and then jacks up the price knowing you can’t escape. Suddenly putting all your eggs in one platform starts to look like a mistake, and no one remembers the simplicity and consistency that came from a single interface on a single platform. Spreading out and embracing multiple platforms, though, can be just as painful. The sales team may promise that the tools are designed to interoperate and speak industry-standard protocols, but that gets you only halfway there. Each may store its data in an SQL database, but some use MySQL, others use PostgreSQL, and still others use Oracle. There’s no simple answer. Too many platforms create a Tower of Babel. Too few bring the risk of vendor lock-in and all the pain of opening that email with the renewal contract.


Manufacturing firms make early bets on the industrial metaverse

The building blocks of the industrial metaverse are “frequently proprietary, siloed and standalone,” according to a recent report by Miller and Forrester colleagues. Digital twins — which might use IoT sensor data and 3D modelling to provide a real-time picture of a piece of equipment or factory, for example — are perhaps closest to realization, but are still limited in some senses. “The reality today is that most digital twins are still asset- and vendor-specific,” Miller told Computerworld, with the same manufacturer responsible for both hardware and software. For example, an ABB robot may be sold with an ABB digital twin, or a Siemens motor will come with a Siemens digital twin — but getting them to work together can be a challenge. While these types of tools offer clear benefits for customers, firms that own multiple products from multiple vendors will eventually want “one digital twin of how the factory or the line is operating, not 100 digital twins of the different components,” said Miller. Even the most advanced precursor technologies, such as factory-spanning digital twins, tend to be the product of a partnership with one vendor.


How businesses can vet their cybersecurity vendors

Companies can’t assume that the vendor is telling the truth, particularly in the authentication market, where there is currently no standardised testing to confirm that solutions meet metrics such as ‘phishing resistance’. When talking to a vendor, the organisation should first ask a simple question: how does the tool prevent social engineering and AiTM attacks? Whilst some solutions might claim to be passwordless or ‘phishing-resistant’, they could instead simply hide the password so that authentication is more convenient while the vulnerability remains. The team needs to determine whether the solution eliminates passwords from both the authentication flow and the account recovery flow, should the user lose their typical login device. And the tool must implement “verifier impersonation protection” to thwart AiTM/proxy-based attacks. Having the security team conduct its research beforehand enables it to come prepared with detailed questions and helps cut through the buzzwords vendors use, so the team can uncover how the tool really works.
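Verifier impersonation protection is easier to reason about with a concrete sketch. The snippet below is illustrative only (the origin value and function name are invented, not from any vendor's product): it shows the core idea behind WebAuthn-style origin binding, where the browser embeds the origin it actually contacted inside the signed client data, so an AiTM proxy on a look-alike domain produces an assertion the real verifier rejects.

```python
import json

# Hypothetical relying-party origin registered for this service.
EXPECTED_ORIGIN = "https://login.example.com"

def verify_assertion_origin(client_data_json: bytes) -> bool:
    """Reject assertions whose signed origin differs from ours.

    A phishing proxy sitting between the user and the real site causes
    the browser to record the proxy's origin, so the check fails even
    though the user "authenticated" successfully from their view.
    """
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

# A direct login carries our origin; a proxied (AiTM) attempt does not.
legit = json.dumps(
    {"type": "webauthn.get", "origin": "https://login.example.com"}
).encode()
proxied = json.dumps(
    {"type": "webauthn.get", "origin": "https://login.examp1e.com"}
).encode()
```

In a real deployment this comparison is one of several checks the verifier performs over the signed client data, but it is the one that makes proxy-based credential relaying fail.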


Hidden dangers loom for subsea cables, the invisible infrastructure of the internet

Subsea cables can fall under a wide range of regulatory regimes, laws and authorities. At national level, there may be several authorities involved in their protection, including national telecom authorities, authorities under the NIS Directive, cybersecurity agencies, national coastguard, military, etc. There are also international treaties in place to be considered, establishing universal norms and the legal boundaries of the sea. ... Challenges for subsea cable resilience: Accidental, unintentional damage through fishing or anchoring has so far been the cause of most subsea cable incidents; Natural phenomena such as undersea earthquakes or landslides can have a significant impact, especially in places where there is a high concentration of cables; Chokepoints, where many cables are installed close to each other, are single points of failure, where one physical attack could strain the cable repair capacity; Physical attacks and cyberattacks should be considered as threats for the subsea cables, the landing points, and the ICT at the landing points.


Datacentre operators ‘hesitant’ over how to proceed with server farm builds as AI hype builds

“The developments in generative AI and the increasing use of a wide range of AI-based applications in datacentres, edge infrastructure and endpoint devices require the deployment of high-performance graphics processing units and optimised semiconductor devices,” said Alan Priestley, vice-president analyst at Gartner. “This is driving the production and deployment of AI chips.” And while Gartner’s figures suggest the AI trend is going to continue to take the world of tech by storm, the market watcher’s recently published Hype Cycle for emerging technologies lists generative AI as being at the “peak of inflated expectations”, which might go some way to explaining why operators are reluctant to rush to kit out their sites to accommodate this trend. For colocation operators that are targeting hyperscale cloud firms, many of which regularly talk up the potential for generative AI to transform how enterprises operate, there is perhaps less reticence, said Onnec’s Linqvist.


Developers: Is Your API Designed for Attackers?

The security firm analyzed 40 public breaches to see what role APIs played in security problems, which Snyder featured in his 2023 Black Hat conference presentation. The issue might be built-in vulnerabilities, misconfigurations in the API, or even a logical flaw in the application itself — and that means it falls on developers to fix it, Snyder said. “It’s a range of things, but it is generally with their own APIs,” Snyder told The New Stack. “It is in their domain of influence, and honestly, their domain of control, because it is ultimately down to them to build a secure API.” The number of breaches analyzed is small — it was limited to publicly disclosed breaches — but Snyder said the problem is potentially much more pervasive. ... In the last couple of months, he said, security researchers who work in this space have uncovered billions of records that could have been breached through poor API design. He pointed to the API design flaws in basically every full-service carrier’s frequent flyer program, which could have exposed entire datasets or allowed for the awarding of unlimited miles and hotel points.


Rethinking Cybersecurity: The Power of the Hacker Mindset

Embracing a hacker mindset involves adopting an external viewpoint of your business to uncover vulnerabilities before they’re exploited. This includes embracing practices like ethical hacking and penetration testing. While forming a specialised ethical hacking team is an option, embedding this mindset within cyber teams and your wider business is equally effective. Key to this transformation is upskilling. Businesses should be offering training to encourage creative thinking when it comes to cybersecurity. Instead of waiting for breaches to learn from mistakes, being proactive is crucial. Regular, monthly upskilling for cybersecurity and IT teams, rather than every six months or even a year, keeps them on the front foot. Encouraging a hacking mindset also shouldn’t be confined to cyber experts; all employees should undergo cyber awareness training. In this fight, businesses and individuals aren’t alone. Numerous training platforms are available, but choosing those that concentrate on providing practical, hands-on skills rooted in real-world attack scenarios is essential. 


How to get started with prompt engineering

Joseph Reeve leads a team of people working on features that require prompt engineering at Amplitude, a product analytics software provider. He has also built internal tooling to make it easier to work with LLMs. That makes him a seasoned professional in this emerging space. As he notes, "the great thing about LLMs is that there’s basically no hurdle to getting started—as long as you can type!" If you want to assess someone's prompt engineering advice, it's easy to test-drive their queries in your LLM of choice. Likewise, if you're offering prompt engineering services, you can be sure your employers or clients will be using an LLM to check your results. So the question of how you can learn about prompt engineering—and market yourself as a prompt engineer—doesn't have a simple, set answer, at least not yet. "We're definitely in the 'wild west' period," says AIPRM's King. "Prompt engineering means a lot of things to different people. To some it's just writing prompts. To others it's fine-tuning and configuring LLMs and writing prompts."


Australia’s new cybersecurity strategy: Build “cyber shields” around the country

The first shield proposes a long-term education of citizens and businesses so that by 2030 they understand cyberthreats and how to protect themselves. This "shield" comes with a plan B: ensuring citizens and businesses have proper support in place so that when they are the victims of a cyber-attack, they can get back up off the mat quickly. The second shield is safer technology. The federal government wants software treated like any other consumer product, one that can be deemed insecure. "So, in 2030 our vision for safe technology is a world where we have clear global standards for digital safety in products that will help us drive the development of security into those products from their very inception," O'Neil said. ... The fourth proposed shield will focus on protecting Australians' access to critical infrastructure, with the Home Affairs and Cybersecurity minister saying that "part of this year will be about government lifting up its own cyber defences to make sure we're protecting our country."


Modeling Asset Protection for Zero Trust – Part 2

The goal when modeling the data environment for a Zero Trust initiative is to have the information available to decide what data should be available when, where, and to whom. That requires you to know what data you have, its value to the business, and the risk level if it is lost. The information is used to inform an automated rules engine that enforces governance based on the state of the data request journey. It is not to define or modify a data model. Hopefully, you already have this information catalogued. From a digital asset perspective, most companies think of their data as their crown jewels, so the data pillar might be the most important pillar. One challenge with data is that access to it is mediated by applications. Many applications are not written to support modern authentication mechanisms and don’t handle the protocols needed to integrate with contemporary data environments, so those applications might not support a Zero Trust data model. Hopefully, you’re already experimenting with current mechanisms for your microservice environment. But, if not, as with any elephant, you eat it one bite at a time.
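To make the "automated rules engine" concrete, here is a minimal policy-evaluation sketch. The names, roles, and classification levels are invented for illustration and do not come from any particular Zero Trust product; the point is only that a catalogue of assets (what the data is, how sensitive it is) plus request context (who, from what device) can drive an allow/step-up/deny decision automatically:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    """Hypothetical catalogue entry produced during data modeling."""
    name: str
    classification: str  # "public" | "internal" | "restricted"

@dataclass
class AccessRequest:
    """State of one step in the data request journey."""
    user_role: str
    device_trusted: bool
    asset: DataAsset

def evaluate(request: AccessRequest) -> str:
    """Return 'allow', 'step_up', or 'deny' from the request's state."""
    if request.asset.classification == "public":
        return "allow"
    if not request.device_trusted:
        return "deny"  # non-public data never leaves via untrusted devices
    if (request.asset.classification == "restricted"
            and request.user_role != "data_owner"):
        return "step_up"  # require stronger authentication first
    return "allow"
```

A real engine would weigh far more signals (location, time, behavioral baselines), but the structure is the same: governance expressed as code over the catalogued attributes of the data.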



Quote for the day:

"Your time is limited, so don't waste it living someone else's life." -- Steve Jobs

Daily Tech Digest - September 20, 2023

Innovation needs to be a culture, not just a practice

It is important to build open organisational structures that let teams avoid the obstacles and hierarchies that frequently stifle creativity. An inventive culture places a strong emphasis on being flat and agile. Employees can communicate their thoughts more freely when they have direct access to decision-makers. The well-known sportswear company Nike is one example of this. Staff members at all levels are welcome to work together on cutting-edge concepts and technologies at the company's "Innovation Kitchen." This open mindset has produced ground-breaking goods like the Nike Flyknit, which transformed the athletic footwear market. ... Most businesses have started encouraging employees across sectors to participate in brainstorming sessions and think outside the box, because they value unconventional thinking and believe there are no bad ideas. But in some circumstances, one should be ready to also support the genuinely absurd. Innovation requires a space where creativity can thrive.


Quantum Plus AI Widens Cyberattack Threat Concerns

The mind-boggling speed of quantum computing is a double-edged sword, however. On one hand, it helps solve difficult mathematical problems much faster. On the other, it would increase the cyberattack capabilities beyond comprehension. “When you marry quantum computing and AI together, you can have an exponential increase in the advantages that both can offer,” said Dana Neustadter, director of product management for security IP at Synopsys. “Quantum computing will be able to enhance AI accuracy, speed, and efficiency. Enhancing AI can be a game changer for the better for many reasons. Paired with quantum computing, AI will have greater ability to solve very complex problems. As well, it will analyze huge amounts of data needed to take decisions or make predictions more quickly and accurately than conventional AI.” Very efficient and resilient solutions for threat detection and secure management can be created with enhanced AI, transforming cybersecurity as we know it today. “However, if used for the wrong reasons, these powerful technologies also can threaten cybersecurity,” Neustadter said.


IoT startups fill security gaps

Insider risks have long been one of the most difficult cybersecurity threats to mitigate. Not only can power users, such as C-level executives, overrule IT policies, but partners and contractors often get streamlined access to corporate resources, and may unintentionally introduce risks in the name of expediency. As IoT continues to encompass such devices as life-saving medical equipment and self-driving vehicles, even small risks can metastasize into major security incidents. For San Francisco-based self-driving car startup Cruise, a way to mitigate the many risks associated with connected cars is to conduct thorough risk assessments of partners and suppliers. The trouble is that third-party assessments were such a time-consuming and cumbersome chore that the existing process was not able to scale as the company grew. “The rise in cloud puts a huge stress on understanding the risk posture of our partners. That is a complex and non-trivial thing. Partnerships are always under pressure,” said Alexander Hughes, Director of Information Security, Trust, and Assurance at Cruise.


Expert: Keep Calm, Avoid Overhyping China's AI Capabilities

"Some of China's bottlenecks relate to a reliance on Western companies to open up new paradigms, China's censorship regime, and computing power bottlenecks," Ding said. "I submitted three specific policy recommendations to the committee, but I want to emphasize one, which is, 'Keep calm and avoid overhyping China's AI capabilities.'" Policymakers also erroneously think anything that helps China around artificial intelligence is going to hurt the U.S. even though giants in China's AI industry like ByteDance, Alibaba and Baidu end up generating a lot of profits that come back into the U.S. economy and hopefully get reinvested into American productivity, according to Ding. "It's a more difficult question than just, 'Any investment in China's AI sector means it's harmful to U.S. national security,'" Ding said. "Continuing to maintain the openness of these global innovation networks is always going to favor the U.S. in the long-run in terms of our ability to run faster."


Beyond Spreadsheets: How Data-Driven Organizations Outperform the Rest

Creating a data-driven culture must start at the executive level to drive the understanding that data is central to the operations and success of your organization, as well as to decision-making at every level. It begins with communicating the importance of data, making it a corporate initiative. From there must follow implementing the data infrastructure and analytics tools that enable every role to get the data needed to drive evidenced-based decision-making. There is no right or wrong organizational structure to create a data-driven culture. Still, creating and assigning roles and responsibilities that will work for your organization, and then staffing and training accordingly, are essential. You may choose to train most of your staff to understand and support analytics, or you may rely on a few for performing analytics while conveying across your organization the overall importance and requirements of using data and analytics to drive desired results. 


Modeling Asset Protection for Zero Trust – Part 1

For operating your IT environment, the Security Information and Event Management (SIEM) system must be a good fit for the infrastructure. Once you have a complete inventory of your infrastructure, we recommend you complete an architectural-level evaluation of your SIEM to ensure good alignment. ... The evaluation should include the cost of setup and three years of operations, an evaluation of organizational competence and available training for each option, and the features of each against your IT landscape. As you evaluate your SIEM environment, consider evaluating your Extended Detection and Response (XDR) capability and performing a similar architectural evaluation. You might consider this part of your SIEM solution or treat it separately, and it might be operated by a separate group. XDR also might not fit well into any pillar evaluation, so it could be overlooked if not captured here. Zero Trust requires identification and valuation of all information technology (IT) assets, automated enforcement of governance, and automated detection, response, and remediation of threats and attacks.


Data Engineer vs. Data Analyst

Data engineers play a pivotal role in establishing and maintaining robust Data Governance practices. They are responsible for designing and implementing data pipelines, ensuring that data is collected, stored, and processed accurately. By implementing rigorous quality checks during the extract, transform, load (ETL) process, they guarantee that the data is clean and reliable for analysis. On the other hand, data analysts rely on high-quality and trustworthy data to derive meaningful insights. They work closely with data engineers to define standards for data collection, storage, and usage. ... So, a crucial similarity between data engineers and data analysts is their shared emphasis on teamwork and collaboration. Both roles recognize that combining their expertise can lead to more accurate insights and better decision-making. Moreover, teamwork enables knowledge sharing between data engineers and analysts. They can exchange ideas, techniques, and best practices, enhancing their individual skill sets while collectively driving innovation in Data Management and analysis.
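As an illustration of the kind of quality gate a data engineer might build into the transform step, here is a small sketch. The field names and rules are invented for the example; the pattern is what matters: rows that fail validation are quarantined for review rather than silently loaded, so analysts downstream only ever see data that passed the checks.

```python
def run_quality_checks(rows, required_fields=("id", "amount")):
    """Validate rows during the transform step of a hypothetical ETL job.

    Returns (clean, quarantined): clean rows proceed to the load step,
    quarantined rows are held back with a note on what failed.
    """
    clean, quarantined = [], []
    for row in rows:
        # Check for missing or empty required fields.
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        # Type check on a field analysts will aggregate over.
        bad_type = not isinstance(row.get("amount"), (int, float))
        if missing or bad_type:
            quarantined.append((row, missing or ["amount: bad type"]))
        else:
            clean.append(row)
    return clean, quarantined
```

In production these checks usually live in a framework (dbt tests, Great Expectations, or pipeline-native validators), but the contract between engineer and analyst is the same: data that reaches the warehouse has already passed an agreed set of rules.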


What AppSec and developers working in cloud-native environments need to know

With the emergence of IaaS, PaaS, and SaaS models, the definition of an application extends to include the associated runtime environment and the underlying infrastructure. Applications are now not just bundles of code, but holistic systems that include the virtualized hardware resources, operating systems, databases, and network configurations they rely on. The advent of microservices and containerization, where an application can consist of many independently deployable components each running in its own environment, further complicates this definition. In a cloud-native application, each microservice with its code, dependencies, and environment could be considered an “application” in its own right. The introduction of Infrastructure as Code (IaC) has further complicated the definition of applications. IaC is the practice of managing and provisioning infrastructure through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.


Could the DOJ’s Antitrust Trial vs Google Drive More Innovation?

The thought process among regulators, he says, might be that the antitrust case against Microsoft brought about change and created opportunities for more competition -- a similar attempt with Google may be worth the effort. “This particular antitrust case really focuses narrowly on the company’s popular search engine, and it alleges that Google uses their 90% market share to illegally throttle competition in search and search advertising,” Kemp says. CTO Jimmie Lee with XFactor.io, a developer of a business decision platform, says he can understand some of big tech’s perspective having come from Meta, Facebook’s parent, and Microsoft. “When you’re in the company, it feels very different from being on the outside,” he says. “From the inside, you see the strength of the technology and how you can better add security and privacy and features and functionalities throughout the entire stack and workflow.”


4 steps for purple team success

Purple teaming is a function of collaborative security. Historically, it has literally brought together offensive security engineers or pen testers from the red side and investigators, detection engineers, and CTI analysts from the blue side. More recently, however, purple teams have looked very different, drawing in developers, architects, information system security officers, software engineers, DFIR teams, and BCP personnel, as well as other departments. To view the purple team simply as a tactical unit would be an oversimplification. Beyond the immediate operational benefits, the true value of a purple team lies in fostering cyber resilience. It is about building an organizational capability that can not only withstand cyber threats but also adapt and recover swiftly from them. By collaboratively assessing, learning, and adapting, the purple team approach instills a resilience mindset, ensuring that the organization is prepared for evolving cyber threats and is capable of bouncing back even when breaches occur.



Quote for the day:

"If you don’t build your dream, someone else will hire you to help them build theirs." -- Dhirubhai Ambani

Daily Tech Digest - September 19, 2023

Experts: 'Quiet cutting' employees makes no sense, and it's costly

The practice involves reassigning workers to roles that don’t align with their career goals to achieve workforce reduction by voluntary attrition — allowing companies to avoid paying costly severance packages or unemployment benefits. “Companies are increasingly using role reassignments as a strategy to sidestep expensive layoffs,” said Annie Rosencrans, people and culture director at HiBob, a human resource platform provider. “By redistributing roles within the workforce, organizations can manage costs while retaining valuable talent, aligning with the current trend of seeking alternatives to traditional layoffs.” ... The optics around quiet cutting and its effects on employee morale are a big problem, however, and experts argue it’s not worth the perceived cost savings. Reassigning workers to jobs that don’t fit their hopes for a career path or align with their skills can demoralize remaining workers and lead to “disengagement,” according to Chertok. He argued that the quiet cutting trend isn’t necessarily intentional; it's more indicative of corporate America’s need to reprioritize how talent is moved around within an organization.


Why We Need Regulated DeFi

One of DeFi’s greatest challenges is liquidity. In a decentralized exchange, liquidity is added and owned by users, who often abandon one protocol for another offering better rewards, resulting in unstable liquidity on DeFi protocols. A liquidity pool is a group of digital assets gathered to facilitate automated and permissionless trading on a decentralized exchange platform. The users of such exchange platforms don’t rely on a third party to hold funds but transact with each other directly. ... There are many systemic risks currently present in DeFi. For example, potential vulnerabilities in smart contracts can expose users to security breaches. DeFi platforms are often interconnected, meaning a problem on one platform can quickly spread and impact others, potentially causing systemic failures. Another potential systemic risk is the manipulation or failure of oracles, which bring real-world data onto the blockchain. This can result in bad decisions and lead to losses. Ultimately, regulated DeFi can help enforce security standards, fostering trust among users.


Microsoft Azure Data Leak Exposes Dangers of File-Sharing Links

There are so many pitfalls in setting up SAS tokens that Wiz's Luttwak recommends against ever using the mechanism to share files from a private cloud storage account. Instead, companies should have a public account from which resources are shared, he says. "This mechanism is so risky that our recommendation is, first of all, never to share public data within your storage account — create a completely separate storage account only for public sharing," Luttwak says. "That will greatly reduce the risk of misconfiguration. If you want to share public data, create an external storage account for public data and use only that." For those companies that continue to want to share specific files from private storage using SAS URLs, Microsoft has added detection of such tokens to GitHub's monitoring for exposed credentials and secrets, and it has rescanned all repositories, the company stated in its advisory. Microsoft recommends that Azure users limit themselves to short-lived SAS tokens, apply the principle of least privilege, and have a revocation plan.
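The shape of the risk is easier to see with a generic sketch of a signed, expiring access token. To be clear, this is not Microsoft's actual SAS algorithm — the key, fields, and format below are invented for illustration — but it shows why short expiry and exact-permission checks limit the blast radius when a sharing URL leaks:

```python
import hashlib
import hmac
import time

SECRET = b"hypothetical-account-key"  # invented signing key for the sketch

def make_token(resource: str, permission: str, ttl_seconds: int) -> str:
    """Issue a token granting one permission on one resource, for a limited time."""
    expiry = int(time.time()) + ttl_seconds
    payload = f"{resource}|{permission}|{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def check_token(token: str, resource: str, permission: str) -> bool:
    """Accept only an unexpired token whose scope exactly matches the request."""
    payload, _, sig = token.rpartition("|")
    res, perm, expiry = payload.split("|")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return (hmac.compare_digest(sig, expected)  # untampered
            and res == resource
            and perm == permission              # least privilege: exact match
            and int(expiry) > time.time())      # short-lived: expired fails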


Chaos Engineering: Path To Build Resilient and Fault-Tolerant Software Applications

The objective of chaos engineering is to unearth system restraints, susceptibilities, and possible failures in a controlled and planned manner before they turn into perilous challenges with severe impact on the organization. A few of the most innovative organizations, learning from past failures, understood the importance of chaos engineering and adopted it as a key strategy to unravel deeply hidden issues and avoid future failures and business impact. Chaos engineering lets application developers forecast and detect probable collapses by disrupting the system on purpose. The disruption points are identified and altered based on potential system vulnerabilities and weak points. This way, system deficiencies are identified and fixed before they degrade into an outage. Chaos engineering is a growing trend for DevOps and IT teams. A few of the world’s most technologically innovative organizations, like Netflix and Amazon, are pioneers in adopting chaos testing and engineering.
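The purposeful-disruption idea can be sketched in a few lines. The decorator below is an illustrative toy (the function names are invented, and real chaos tooling such as Chaos Monkey operates at the infrastructure level rather than in-process): it injects failures into a call path so that retry and fallback logic can be rehearsed in a controlled drill rather than discovered during a real outage.

```python
import random

def chaos(failure_rate=0.2, exc=ConnectionError):
    """Decorator that injects controlled failures into a call path."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise exc("injected fault")  # the planned disruption
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=1.0)  # always fail, for a deterministic drill
def fetch_inventory():
    return {"items": 42}

def fetch_with_fallback():
    """The resilience being tested: degrade gracefully, don't crash."""
    try:
        return fetch_inventory()
    except ConnectionError:
        return {"items": None, "degraded": True}
```

If the fallback path is wrong, the drill surfaces it immediately; dialing `failure_rate` down lets the same fault run probabilistically in a staging environment.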


Unregulated DeFi services abused in latest pig butchering twist

At first glance, the pig butchering ring tracked by Sophos operates in much the same way as a legitimate one, establishing pools of cryptocurrency assets and adding new traders – or, in this case, victims – until such time as the cyber criminals drain the entire pool for themselves. This is what is known as a rug-pull. ... “When we first discovered these fake liquidity pools, it was rather primitive and still developing. Now, we’re seeing shā zhū pán scammers taking this particular brand of cryptocurrency fraud and seamlessly integrating it into their existing set of tactics, such as luring targets over dating apps,” explained Gallagher. “Very few understand how legitimate cryptocurrency trading works, so it’s easy for these scammers to con their targets. There are even toolkits now for this sort of scam, making it simple for different pig butchering operations to add this type of crypto fraud to their arsenal. While last year, Sophos tracked dozens of these fraudulent ‘liquidity pool’ sites, now we’re seeing more than 500.”


Time to Demand IT Security by Design and Default

Organizations can send a strong message to IT suppliers by re-engineering procurement processes and legal contracts to align with secure by design and security by default approaches. Updates to procurement policies and processes can set explicit expectations and requirements of their suppliers and flag any lapses. This isn’t about catching vendors out – many will benefit from the nudge. Changes in procurement assessment criteria can be flagged to IT suppliers in advance to give them a chance to course-correct. Suppliers can then be assessed against these yardsticks. If they fail to measure up, organizations have a clear justification to stop doing business with them. The next step is to create liability or penalty clauses in contracts that force IT vendors to share security costs for fixes or bolt-ons. This will drive them to devote more resources to security and prevent rather than scramble to cure security risks. Governments can support this by introducing laws that make it easier to claim under contracts for poor security. 


DeFi as a solution in times of crisis

The collapse of Silicon Valley Bank in March 2023 shows that even large banks are still vulnerable to failure. But instead of requiring trust that their money is still there, Web3 users can verify their holdings directly on chain. Additionally, blockchain technology allows for a more efficient and decentralized financial landscape. The peer-to-peer network pioneered by Bitcoin means that investors can hold their own assets and transact directly with no middlemen and significantly lower fees. And unlike with traditional banks, the rise of DeFi sectors like DEXs, lending and liquid staking means individuals can now have full control over exactly how their deposited assets are used. Inflation is yet another ongoing problem that crypto and DeFi help solve. Unlike fiat currencies, cryptocurrencies like bitcoin have a fixed total supply. This means that your holdings in BTC cannot be easily diluted, as they can be if you hold a currency such as USD. While a return to the gold standard of years past is sometimes proposed as a potential solution to inflation, adopting crypto as legal tender would have a similar effect while also delivering a range of other benefits like enhanced efficiency.


Cyber resilience through consolidation part 1: The easiest computer to hack

Most cyberattacks succeed because of simple mistakes caused by users, or users not following established best practices. For example, having weak passwords or using the same password on multiple accounts is critically dangerous, but unfortunately a common practice. When a company is compromised in a data breach, account details and credentials can be sold on the dark web, and attackers then attempt the same username-password combination on other sites. This is why password managers, both third-party and browser-native, are seeing growing adoption. Two-factor authentication (2FA) is also growing in practice. This security method requires users to provide another form of identification besides just a password — usually via a verification code sent to a different device, phone number or e-mail address. Zero trust access methods are the next step. This is where additional data about the user and their request is analyzed before access is granted.
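The "verification code" behind most authenticator-app 2FA is a time-based one-time password (TOTP). A minimal RFC 6238 sketch in pure Python shows why a stolen password alone is no longer enough — the code changes every 30 seconds and is derived from a secret the attacker doesn't have. (The secret used in the test is the RFC's published test key, not a real credential.)

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor: number of elapsed time steps since the epoch.
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)
```

Both the server and the user's device compute the same code from the shared secret and the current time, so no code ever travels between them at enrollment time. Note that plain TOTP is still phishable in real time, which is why the phishing-resistant methods discussed elsewhere in this digest go further.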


AI for Developers: How Can Programmers Use Artificial Intelligence?

If you write code snippets purely by hand, you are prone to errors. If you audit existing code by hand, you are prone to errors. Many things that happen during software development are prone to errors when they’re done manually. No, AI for developers isn’t completely bulletproof. However, a trustworthy AI tool can help you avoid things like faulty code writing and code errors, ultimately helping you to enhance code quality. ... AI is not 100% bulletproof, and you’ve probably already seen the headlines: “People Are Creating Records of Fake Historical Events Using AI“; “Lawyer Used ChatGPT In Court — And Cited Fake Cases. A Judge Is Considering Sanctions“; “AI facial recognition led to 8-month pregnant woman’s wrongful carjacking arrest in front of kids: lawsuit.” This is what happens when people take artificial intelligence too far and don’t use any guardrails. Your own coding abilities and skill set as a developer are still absolutely vital to this entire process. As much as software developers might love to completely lean on an AI code assistant for the journey, the technology just isn’t to that point.


The DX roadmap: David Rogers on driving digital transformation success

Companies mistakenly think that the best way to achieve success is by committing a lot of resources and focusing on implementation at all costs with the solution they have planned. Many organizations get burned by this approach because they don’t realize that markets are shifting fast, new technologies are arriving rapidly, and competitive dynamics are changing swiftly in the digital era. For example, CNN decided to get into digital news after looking at many benchmarks and reading several reports, thinking subscribers would pay monthly for a standalone news app. It was a disaster, and they shut down the initiative within a month. To overcome this challenge, companies must first unlearn the habit of assuming they know things they don’t and trying to manage through planning. They should instead manage through experimentation. CIOs can help their enterprises in this area. They must bring what they have learned in their evolution toward agile software development over the years and help apply these rules of small teams, customer centricity, and continuous delivery to every part of the business.



Quote for the day:

"Strategy is not really a solo sport _ even if you_re the CEO." -- Max McKeown

Daily Tech Digest - September 18, 2023

The ‘Great Retraining’: IT upskills for the future

As the technology ecosystem expands, Servier Pharmaceuticals’ Yunger believes cultivating hard-to-find skill sets from within is instrumental to future-proofing the IT organization. The company, a Google Cloud Platform shop, came face-to-face with that reality when it became difficult to find specialists, shifting its emphasis to growing its own talent. Yunger takes a talent lifecycle management approach that considers the firm’s three- to five-year strategy, aligns it to the requisite IT skills, and then matches the plan to individualized development and training programs. “We provide our vision of the future to our existing team and give them an opportunity to self-select into those paths to meet our future needs,” he explains. “The better our long-term vision, the more time we have to give our team the chance to learn and grow.” The University of California, Riverside, which is undertaking a similar practice to nurture IT talent from within, makes a concerted effort to start any large-scale reskilling initiative with those most willing to embrace change. 


The double-edged sword of AI in financial regulatory compliance

As fraudsters obtain more personal data and create more believable fake IDs, the accuracy of AI models improves, leading to more successful scams. The ease of creating believable identities enables fraudsters to scale identity-related scams with high success rates. Another key area where generative AI models can be employed by criminals is during various stages of the money laundering process, making detection and prevention more challenging. For instance, fake companies can be created to facilitate fund blending, while AI can simplify the generation of fake invoices and transaction records, making them more convincing. Furthermore, by bypassing KYC/CDD checks, it’s possible to create offshore accounts that hide the beneficial owners behind money laundering schemes. Generating false financial statements becomes effortless and AI can identify loopholes in legislation to facilitate cross-jurisdictional money movements.


Growing With AI Not Against It: How To Stay One Step Ahead

The key to effectively integrating AI into your business lies in proactive engagement. Rather than being passive recipients of technological changes, businesses should take an active role in understanding AI's potential applications. Reflecting on prominent companies such as Kodak and Nokia, which once dominated their respective industries but ultimately faltered due to their reluctance to adopt technological advancements, underscores the importance of embracing AI as a transformative force. Consider Netflix's evolution from mailing DVDs to streaming and its use of AI algorithms to recommend personalized content to users. ... In the face of advancing AI technology, the role of leaders is not merely to keep up but to set the pace. By actively engaging with AI, embracing it as a partner, learning from mistakes, and strategically adapting our approach, we position ourselves to harness its potential to foster innovation and enable us to navigate the future with confidence.


Metaverse and Telemedicine: Creating a Seamless Virtual Healthcare Experience

Firstly, the convergence of new core technologies like blockchain, digital twins, and virtual hospitals into the Metaverse will empower clinicians to offer more integrated treatment packages and programs. Secondly, using AR and VR technologies will enhance patient experiences and outcomes. Another benefit of the Metaverse for telemedicine is that it will facilitate collaboration among healthcare professionals. The ability to share information between healthcare professionals immediately will enable quicker pinpointing of the causes of illnesses. Moreover, the Metaverse will offer new opportunities to students and trainees to examine the human body in a safe, virtual reality educational environment. Surgeons are already using VR, AR, and AI technology to perform minimally invasive surgeries, and the Metaverse opens up new frontiers in this area. Surgeons will be able to get a complete 360-degree view of a patient’s body, allowing them to better perform complex procedures using these immersive technologies.


Adaptive Security: A Dynamic Defense for a Digital World

Adaptive security systems employ continuous monitoring to gain real-time insights into an organization's network, applications, and endpoints. This continuous data collection allows for the rapid detection of abnormal behavior and potential threats. ... Understanding the context of an activity is crucial in adaptive security. Systems analyze not only the behavior of individual elements but also the relationships between them. This context-awareness helps in distinguishing between normal and malicious activities, reducing false positives. ... Adaptive security leverages machine learning and artificial intelligence (AI) algorithms to process vast amounts of data and identify patterns indicative of threats. These algorithms can adapt and evolve their detection capabilities based on new information and emerging attack vectors. ... Automation is a core element of adaptive security. When a potential threat is detected, adaptive security systems can automatically respond by isolating affected systems, blocking suspicious traffic, or alerting security teams for further investigation. 
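The monitor-detect-respond loop described above can be reduced to a minimal sketch, assuming a single numeric metric (say, requests per minute per endpoint) and a naive statistical baseline; real adaptive security platforms use far richer context and ML models, but the control flow is the same:

```python
from collections import deque
from statistics import mean, stdev

class AdaptiveMonitor:
    """Toy illustration: learn a rolling baseline of a metric and trigger an
    automated response when observed behavior deviates strongly from it."""

    def __init__(self, window: int = 50, threshold: float = 3.0):
        self.history = deque(maxlen=window)   # continuous-monitoring buffer
        self.threshold = threshold            # tolerated deviation, in std-devs

    def observe(self, value: float) -> str:
        # Only judge once enough baseline data has been collected.
        if len(self.history) >= 10:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                self.history.append(value)    # keep adapting the baseline
                return "isolate-and-alert"    # automated-response hook
        self.history.append(value)
        return "allow"
```

The rolling window is what makes the baseline "adaptive": yesterday's anomaly gradually becomes part of today's normal, reducing false positives as traffic patterns shift.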


The Power Duo: How Platforms and Governance Can Shape Generative AI

As you catalog the tools in your organization, consider where most of your development takes place. Is it happening solely in notebooks requiring code knowledge? Are you versioning your work through a tool like GitHub, which is often confusing to a non-coding audience? How is documentation handled and maintained over time? Oftentimes, business stakeholders and consumers of the model are locked out of the development process because there is a lack of technical understanding and documentation. When work happens in a silo, hand-offs between teams can be inefficient and result in knowledge loss or even operational roadblocks. This leads to results that are not trusted or, even worse, outputs that are never adopted. Many organizations wait too long before leveraging business experts during the preparation and build stages of the AI lifecycle. ... This might be because only some of the glued-together infrastructure is understood by the business unit, the hand-off between teams is clunky and poorly documented, or the steps aren’t clearly laid out in an understandable manner.


How India is driving tech developments at G20

While there were no major technology-related announcements, a lot of indirect spillovers can be found in discussions on artificial intelligence (AI) and crypto regulations, taking a human-centric approach to technology, digitisation of trade documents and tech-enabled development of agriculture and education. As a run-up, there were recommendations and policy actions for the business sector, including the Startup20 initiative to support startup companies and the focus on digital public infrastructure (DPI). The summit had also cast the spotlight on climate change commitments, clean energy, and sustainable development goals. Pradeep Gupta, founder of think tank Security and Policy Initiatives, noted that the emphasis on climate change initiatives at G20 would require IT to play a role in areas like equipment, data management and analytics. “Carbon credits cannot function without good AI and data technology in place,” he said. “DPI will also be a big lever for the industry.” V K Sridhar ... agreed that IT will be instrumental in driving all the climate change agreements that emerged at this G20 – both from a technology and administrative point of view.


Executive Q&A: Developing Data-Focused Professionals

Many universities have been caught unprepared for the exploding demand for AI skills. Most educational programs are traditional (four years) and do not necessarily give students the specialized, just-in-time skills they need for these jobs. Deloitte had an interesting article about “AI whisperers” as the job of the future, referring to enterprises’ need for employees who deeply understand machine learning algorithms, data structures, and programming languages. Such jobs are already being advertised. An institute of higher education needs to be agile enough to create concentrations and certificates that quickly provide students and existing employees with just-in-time skills. ... There is inertia, and you can argue it is by design: universities are most comfortable with a traditional four-year education. They know how to do that, and the education boards that approve these programs are also comfortable with that format. However, a four-year education does not speak to all students, or to their needs and where they are in life.


How to Become a Database Administrator

Capacity planning is a core responsibility of database administrators. Capacity planning is about estimating what resources will be needed – and available – in the future. These resources include computer hardware, software, storage, and connection infrastructure. Fortunately, planning for infrastructure-as-a-service (IaaS) is quite similar to planning for on-premises infrastructure. The basic difference in planning is the additional flexibility offered by the cloud. This flexibility allows DBAs to plan for the business’s immediate needs instead of planning for needs three to four years in advance. DBAs can also make use of the cloud’s ability to quickly scale up or down to meet the client’s demands. ... The DBA must be consciously aware of the business’s changing demands and the tools offered in the various clouds. Organizing the business in preparation for surge events – such as Black Friday or the start of school in September – and using the on-demand scalability available in cloud platforms is a primary responsibility of the modern DBA. Anticipating and responding to cyclical demands or major events makes the organization much more efficient.
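A first pass at the estimation step of capacity planning can be sketched as a simple linear trend projected over historical usage samples; a real forecast would also model seasonality and surge events like the Black Friday example above:

```python
def project_capacity(usage_gb: list[float], months_ahead: int) -> float:
    """Fit a least-squares linear trend to monthly usage samples and project it
    forward. A rough first-pass estimate, not a full forecasting model."""
    n = len(usage_gb)
    xs = range(n)
    x_mean = (n - 1) / 2
    y_mean = sum(usage_gb) / n
    # Ordinary least-squares slope and intercept.
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, usage_gb)) / \
            sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    return intercept + slope * (n - 1 + months_ahead)
```

For example, with storage growing 10 GB per month, projecting two months past the last sample simply extends the fitted line; the cloud's elasticity then means the DBA provisions toward that estimate incrementally instead of buying three years of headroom up front.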


SSE vs SASE: What You Need to Know

The Security Service Edge (SSE) framework was also coined by Gartner, but several years later, in 2021. The SSE framework retains most of the core elements of SASE. The key difference is that SSE is designed for IT environments where SD-WAN is not required. SSE fits well for networks that do not have multiple paths to reach destinations and have no need for application-based routing decisions. SSE is responsible for secure access to the web, cloud services, and applications. One of the top business scenarios in which SSE works best is VPN replacement for remote employees. ... Typically, those considering SSE want a purely cloud-based security platform that provides a range of security functions at the edge of the network. As with SASE, leading networking and security vendors also have SSE options. However, the cloud-native nature of SSE means it is often marketed as a single platform that can be easily deployed, managed, and scaled. For this reason, SSE will likely gain traction at organizations looking to simplify and scale security for remote workers and transition to cloud-native environments.



Quote for the day:

"Everything you want is on the other side of fear." -- Jack Canfield

Daily Tech Digest - September 17, 2023

Experiment: IT companies eager to hire self-taught pros

“Self-education can be a valuable pathway to a successful career in cybersecurity and IT,” he says. “However, it may be challenging for self-learners to gain a comprehensive understanding of complex topics without structured guidance.” He adds: “Many cybersecurity roles require certifications and degrees for the validation of skills and knowledge. And while self-learners can earn certifications through self-study, some employers may still prefer candidates with formal degrees or recognized certifications.” Traditional education often provides opportunities for networking and internships, which can also be essential to career growth. "It's a very exciting time for education right now. People who are yearning to learn have a myriad of choices. Traditional paths are no longer the only way to secure essential experience and expertise to build careers,” said Sharahn McClung, career coach at TripleTen, an online part-time coding bootcamp. She believes that self-education puts learners in the driver’s seat, and people can find what they need to fit their unique circumstances and goals.


Eliminate roles, not people: fine-tuning the talent search during times of change

When someone expresses an interest in something, whether it’s emerging tech or a new process, are they going to step up? Do they know what they claim to know? And at the end, are they excited about sharing that? If you see that passion, pick them up and put them where they want to be and you’ll have such greater morale and engagement. It really is something any organization can do; they just have to make the space for it. It’s something where any HR leader can ask an employee, “Are you doing something you’re passionate about? Is there something you want to learn more about? Would you rather grow more in your current role, or explore another facet of the business?” Ask and you’ll be amazed at the data you get from one well-crafted question. From there, you can create that talent bank that says, “Oh, Julia actually said she was really interested in mobile computing, so we’re picking you up and putting you right here.” It’s easily done and accomplished, but I’m also a big fan of demonstrating what you know. So if you’re passionate about something, you know the universal knowledge behind it.


Top Intent-Based Networking Benefits and Challenges Explained

Intent-based networking (IBN) is a software-enabled automation technique that improves network operations and uptime by combining machine learning, artificial intelligence, analytics, and orchestration. IBN allows for flexible and agile network design that optimizes the quality of service for end users, using an algorithm that automates much of the process and scales well at a low cost. While traditional approaches to network management can scale up to a certain point, they quickly run into problems as a network grows larger. IBN addresses these issues by automating processes based on intent, giving network administrators tools that make it easier to manage large networks. ... IBN architecture is guided by a high-level business policy derived from user feedback. The software then checks to see if a user’s query is doable and sends proposed setups to the network administrator for authorization. This means intent is translated into actionable plans by validating against current network constraints.
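The translate-validate-propose loop described above can be sketched as follows; all field names and the network model here are hypothetical, invented purely for illustration:

```python
def translate_intent(intent: dict, network: dict) -> dict:
    """Toy illustration of the IBN loop: check a declared intent against current
    network constraints and emit a proposed configuration for admin approval."""
    link = network["links"].get((intent["src"], intent["dst"]))
    if link is None:
        return {"status": "rejected", "reason": "no path between endpoints"}
    if intent["min_bandwidth_mbps"] > link["free_mbps"]:
        return {"status": "rejected", "reason": "insufficient bandwidth"}
    return {
        "status": "pending-approval",          # admin authorizes before rollout
        "config": {
            "path": (intent["src"], intent["dst"]),
            "reserve_mbps": intent["min_bandwidth_mbps"],
        },
    }
```

The key property is that the operator states *what* they want (a bandwidth guarantee between two endpoints) and the system decides *whether and how* it can be realized, rather than the operator hand-editing device configurations.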


Older workers are skilled and attentive listeners and prove to be exceptional assets in the workplace due to their receptiveness to training. Their ability to grasp instructions effectively and apply them with minimal repetition is a valuable trait. ... Older talents make excellent employees due to their efficiency and the confidence they have in sharing their suggestions and ideas. Their extensive experience in various roles equips them with a deep understanding of how tasks can be executed more effectively, ultimately leading to cost savings for companies. Additionally, their years of experience have cultivated their self-assuredness, making them unafraid to communicate their insights and recommendations to management. ... Hiring older workers can lead to significant savings in labour costs. Many of them come with existing insurance coverage from previous employers or have supplementary sources of income, which makes them more open to accepting slightly lower wages for their desired positions. 


Are You a Disruptor or a Destructor? A Complete Guide to Innovation for Today's Leaders

Disruptive Innovation is a term coined by Clayton Christensen in 1997. It refers to a process where a smaller company, often with fewer resources, manages to challenge established industry leaders. The disruptors do this by targeting overlooked market segments or creating new markets altogether. Over time, these disruptors refine their products or services and start attracting a broader audience, eventually undermining the existing market leaders. ... On the flip side, Destructive Innovation refers to technologies or practices that harm or make existing models obsolete without adding significant value to the industry or consumers. ... the path you choose has profound implications for your business model, market positioning, and long-term sustainability. Whether you're a seasoned executive, a budding entrepreneur or a forward-thinking sales director, understanding these terms can help you steer your company in the direction that leads to long-term success rather than a short-lived buzz.


Platform Engineering: What’s Hype and What’s Not?

Rather than dealing a death blow to DevOps, a more accurate take is that platform engineering is the next evolution of DevOps and SRE (site reliability engineering). In particular, it benefits developers struggling with code production bottlenecks as they wait on internal approvals or fulfillment. It also helps devs deliver on their own timeline rather than that of their IT team. And it helps operator types (such as SREs or DevOps engineers) who are feeling the pain of repetitive request fulfillment and operational firefighting — busy work that keeps them from building their vision for the future. ... The agile development practices that are at the core of DevOps culture — such as collaboration, communication, and continuous improvement — have not extended to the operations domain. This has hobbled the ability of agile development teams to quickly deliver products. In order not to perpetuate this dynamic, DevOps team culture should evolve to support platform engineering, and platform teams should embrace DevOps team culture.


10 principles to ensure strong cybersecurity in agile development

Security is a team sport. Every developer needs to play their part in ensuring that code is free of security loopholes. Developers often lack the knowledge and understanding of security issues and they tend to prioritize software delivery over security matters. To empower developers, organizations must invest resources towards coaching, mentoring, and upskilling. This includes a combination of security training and awareness sessions, mentoring from senior developers, specialized agile security training events, and access to freely available resources such as OWASP, CWE, BSIMM (Building Security In Maturity Model), SAFECode, and CERT. ... It’s less costly and more efficient to bake security in from the start, rather than trying to add it after the cake comes out of the oven. Leadership must establish processes that help manage information risk throughout the entire development lifecycle. This includes agreeing on high-level application architecture from a security perspective, identifying a list of "security-critical" applications and features, performing a business impact assessment, conducting information risk and vulnerability assessments at early stages, and a process for reporting newly identified risks. 


“Embrace cybersecurity automation and orchestration, but in moderation,” says my puppy

There are three general principles to employ when using automation and orchestration to minimize these risks and maximize the gains in efficiency, cost reduction, and security effectiveness. (1) Scale: automate at small scales, not large. Large-scale automation can be done, but is best done through incremental increases and gains over time rather than in monumental leaps. (2) Look and test: look at the blind spots that automation can cause and test actively with red teaming and purple teaming. If automation is driving analysts to investigate a certain way, occasionally send them different types of prompts or alerts, or look at the data that is ignored. (3) Check under the hood: make sure that those who are getting support and are growing their skills in the shadow of automation and orchestration understand how that happens. Encourage skepticism of the system itself in operations. Overall, automation and orchestration are both critical components of a strong cybersecurity strategy. Arguably, they may be necessary to grow in maturity and handle advanced threats at scale.


The future of private AI: open source vs closed source

When deciding which approach to take, investment is always a consideration. Developing private AI models in-house typically involves a greater investment than platform or public cloud options, as it requires businesses to fund and build a team of experts, including data scientists, data engineers and software engineers. On the other hand, taking a platform approach to private AI does not require a team of experts, which significantly reduces the complexity and cost associated with private AI deployment. Speed of deployment is another consideration. ... Another important factor to consider when choosing an AI strategy is whether to train AI using an open source AI or a closed AI model. While open source AI is pre-trained on huge sets of publicly available data, the security and compliance risks associated with this approach are significant. To mitigate risks, organisations can adopt a hybrid open source AI model, where their data is kept private but the code, training algorithms and architecture of the AI model are publicly available. Closed AI models, on the other hand, are kept private by the organisations that develop them, including the training data, AI codebase and underlying architecture. 


Domain-Driven Cloud: Aligning your Cloud Architecture to your Business Model

DDC extends the principles of DDD beyond traditional software systems to create a unifying architecture spanning business domains, software systems and cloud infrastructure. Our customers perpetually strive to align "people, process and technology" together so they can work in harmony to deliver business outcomes. However, in practice, this often falls down as the Business (Biz), IT Development (Dev) and IT Operations (Ops) all go to their separate corners to design solutions for complex problems that actually span all three. What emerges is business process redesigns, enterprise architectures and cloud platform architecture all designed and implemented by different groups using different approaches and localized languages. What’s missing is a unified architecture approach using a shared language that integrates BizDevOps. This is where DDC steps in, with a specific focus on aligning the cloud architecture and software systems that run on them to the bounded contexts of your business model, identified using DDD. 
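As a hypothetical sketch of the idea, a cloud landing-zone layout could be derived directly from the bounded contexts of the business model, so that cloud accounts and resource groups mirror the business rather than the org chart; all names below are invented for illustration:

```python
# Hypothetical bounded contexts identified with DDD, each owning its services.
BOUNDED_CONTEXTS = {
    "ordering": {"services": ["order-api", "order-events"]},
    "billing":  {"services": ["invoice-api"]},
    "shipping": {"services": ["tracking-api", "label-service"]},
}

def landing_zone(contexts: dict, env: str) -> dict:
    """One resource group per bounded context per environment, with a naming
    convention that carries the shared (ubiquitous) language into the cloud."""
    return {
        name: {
            "resource_group": f"{name}-{env}",
            "deployments": [f"{svc}.{name}.{env}" for svc in ctx["services"]],
        }
        for name, ctx in contexts.items()
    }
```

Because Biz, Dev, and Ops all navigate the cloud estate using the same context names, a change request, a deployment pipeline, and a cost report for "billing" all point at the same boundary.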



Quote for the day:

"If you spend your life trying to be good at everything, you will never be great at anything." -- Tom Rath