
Daily Tech Digest - July 31, 2025


Quote for the day:

"Listening to the inner voice & trusting the inner voice is one of the most important lessons of leadership." -- Warren Bennis


AppGen: A Software Development Revolution That Won't Happen

There's no denying that AI dramatically changes the way coders work. Generative AI tools can substantially speed up the process of writing code. Agentic AI can help automate aspects of the SDLC, like integrating and deploying code. ... Even when AI generates and manages code, an understanding of concepts like the differences between programming languages or how to mitigate software security risks is likely to spell the difference between the ability to create apps that actually work well and those that are disasters from a performance, security, and maintainability standpoint. ... NoOps — short for "no IT operations" — theoretically heralded a world in which IT automation solutions were becoming so advanced that there would soon no longer be a need for traditional IT operations at all. Incidentally, NoOps, like AppGen, was first promoted by a Forrester analyst. He predicted that, "using cloud infrastructure-as-a-service and platform-as-a-service to get the resources they need when they need them," developers would be able to automate infrastructure provisioning and management so completely that traditional IT operations would disappear. That never happened, of course. Automation technology has certainly streamlined IT operations and infrastructure management in many ways. But it has hardly rendered IT operations teams unnecessary.


Middle managers aren’t OK — and Gen Z isn’t the problem: CPO Vikrant Kaushal

One of the most common pain points? Mismatched expectations. “Gen Z wants transparency—they want to know the 'why' behind decisions,” Kaushal explains. That means decisions around promotions, performance feedback, or even task allocation need to come with context. At the same time, Gen Z thrives on real-time feedback. What might seem like an eager question to them can feel like pushback to a manager conditioned by hierarchies. Add in Gen Z’s openness about mental health and wellbeing, and many managers find themselves ill-equipped for conversations they’ve never been trained to have. ... There is a growing cultural narrative that managers must be mentors, coaches, culture carriers, and counsellors—all while delivering on business targets. Kaushal doesn’t buy it. “We’re burning people out by expecting them to be everything to everyone,” he says. Instead, he proposes a model of shared leadership, where different aspects of people development are distributed across roles. “Your direct manager might help you with your day-to-day work, while a mentor supports your career development. HR might handle cultural integration,” Kaushal explains. ... When asked whether companies should focus on redesigning manager roles or reshaping Gen Z onboarding, Kaushal is clear: “Redesign manager roles.”


New AI model offers faster, greener way for vulnerability detection

Unlike LLMs, which can require billions of parameters and heavy computational power, White-Basilisk is compact, with just 200 million parameters. Yet it outperforms models more than 30 times its size on multiple public benchmarks for vulnerability detection. This challenges the idea that bigger models are always better, at least for specialized security tasks. White-Basilisk’s design focuses on long-range code analysis. Real-world vulnerabilities often span multiple files or functions. Many existing models struggle with this because they are limited by how much context they can process at once. In contrast, White-Basilisk can analyze sequences up to 128,000 tokens long. That is enough to assess entire codebases in a single pass. ... White-Basilisk is also energy-efficient. Because of its small size and streamlined design, it can be trained and run using far less energy than larger models. The research team estimates that training produced just 85.5 kilograms of CO₂. That is roughly the same as driving a gas-powered car a few hundred miles. Some large models emit several tons of CO₂ during training. This efficiency also applies at runtime. White-Basilisk can analyze full-length codebases on a single high-end GPU without needing distributed infrastructure. That could make it more practical for small security teams, researchers, and companies without large cloud budgets.


Building Adaptive Data Centers: Breaking Free from IT Obsolescence

The core advantage of adaptive modular infrastructure lies in its ability to deliver unprecedented speed-to-market. By manufacturing repeatable, standardized modules at dedicated fabrication facilities, construction teams can bypass many of the delays associated with traditional onsite assembly. Modules are produced concurrently with the construction of the base building. Once the base reaches a sufficient stage of completion, these prefabricated modules are quickly integrated to create a fully operational, rack-ready data center environment. This “plug-and-play” model eliminates many of the uncertainties in traditional construction, significantly reducing project timelines and enabling customers to rapidly scale their computing resources. Flexibility is another defining characteristic of adaptive modular infrastructure. The modular design approach is inherently versatile, allowing for design customization or standardization across multiple buildings or campuses. It also offers a scalable and adaptable foundation for any deployment scenario – from scaling existing cloud environments and integrating GPU/AI generation and reasoning systems to implementing geographically diverse and business-adjacent agentic AI – ensuring customers achieve maximum return on their capital investment.


‘Subliminal learning’: Anthropic uncovers how AI fine-tuning secretly teaches bad habits

Distillation is a common technique in AI application development. It involves training a smaller “student” model to mimic the outputs of a larger, more capable “teacher” model. This process is often used to create specialized models that are smaller, cheaper and faster for specific applications. However, the Anthropic study reveals a surprising property of this process. The researchers found that teacher models can transmit behavioral traits to the students, even when the generated data is completely unrelated to those traits. ... Subliminal learning occurred when the student model acquired the teacher’s trait, despite the training data being semantically unrelated to it. The effect was consistent across different traits, including benign animal preferences and dangerous misalignment. It also held true for various data types, including numbers, code and CoT reasoning, which are more realistic data formats for enterprise applications. Remarkably, the trait transmission persisted even with rigorous filtering designed to remove any trace of it from the training data. In one experiment, they prompted a model that “loves owls” to generate a dataset consisting only of number sequences. When a new student model was trained on this numerical data, it also developed a preference for owls. 
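The distillation setup the study builds on can be illustrated with a minimal sketch: a "student" model is fit to a "teacher" model's outputs rather than to ground-truth labels, so whatever the teacher encodes ends up in the training signal. The functions and data here are purely illustrative, not taken from the Anthropic study.

```python
# Minimal sketch of distillation: a linear "student" is fit to mimic a
# "teacher" function's outputs instead of ground-truth labels.
# All names and values are illustrative.

def teacher(x):
    # Stand-in for a large model: some fixed input-output mapping.
    return 3.0 * x + 1.0

# Build a transfer set: inputs plus the teacher's outputs on them.
xs = [float(i) for i in range(10)]
ys = [teacher(x) for x in xs]

# Fit student y = a*x + b by ordinary least squares (closed form).
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

print(a, b)  # the student recovers the teacher's behaviour
```

The point of the sketch is that the student never sees labels, only the teacher's outputs; the paper's surprising finding is that traits can ride along in that signal even when the transfer data looks semantically unrelated.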


How to Build Your Analytics Stack to Enable Executive Data Storytelling

Data scientists and analysts often focus on building the most advanced models. However, they often overlook the importance of positioning their work to enable executive decisions. As a result, executives frequently find it challenging to gain useful insights from the overwhelming volume of data and metrics. Despite the technical depth of modern analytics, decision paralysis persists, and insights often fall short of translating into tangible actions. At its core, this challenge reflects an insight-to-impact disconnect in today’s business analytics environment. Many teams mistakenly assume that model complexity and output sophistication will inherently lead to business impact. ... Many models are built to optimize a singular objective, such as maximizing revenue or minimizing cost, while overlooking constraints that are difficult to quantify but critical to decision-making. ... Executive confidence in analytics is heavily influenced by the ability to understand, or at least contextualize, model outputs. Where possible, break down models into clear, explainable steps that trace the journey from input data to recommendation. In cases where black-box AI models are used, such as random forests or neural networks, support recommendations with backup hypotheses, sensitivity analyses, or secondary datasets to triangulate your findings and reinforce credibility.
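The sensitivity analyses mentioned above can be as simple as a one-at-a-time perturbation: bump each input by a fixed percentage and report the relative change in the model's output. The revenue model and parameter names below are hypothetical, just to show the shape of the exercise.

```python
# One-at-a-time sensitivity analysis on a toy revenue model.
# The model and its parameters are illustrative assumptions.

def revenue_model(price, volume, churn):
    return price * volume * (1.0 - churn)

base = {"price": 20.0, "volume": 1000.0, "churn": 0.1}
baseline = revenue_model(**base)

sensitivities = {}
for name in base:
    bumped = dict(base)
    bumped[name] *= 1.10  # perturb one input by +10%
    sensitivities[name] = (revenue_model(**bumped) - baseline) / baseline

# Relative output change per +10% input change, e.g. which lever
# an executive should care about most.
print(sensitivities)
```

A table like this gives executives a concrete handle on a recommendation ("revenue is ten times more sensitive to price than to churn in this model") without requiring them to open the black box.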


GDPR’s 7th anniversary: in the AI age, privacy legislation is still relevant

In the years since GDPR’s implementation, the shift from reactive compliance to proactive data governance has been noticeable. Data protection has evolved from a legal formality into a strategic imperative — a topic discussed not just in legal departments but in boardrooms. High-profile fines against tech giants have reinforced the idea that data privacy isn’t optional, and compliance isn’t just a checkbox. That progress should be acknowledged — and even celebrated — but we also need to be honest about where gaps remain. Too often GDPR is still treated as a one-off exercise or a hurdle to clear, rather than a continuous, embedded business process. This short-sighted view not only exposes organisations to compliance risks but causes them to miss the real opportunity: regulation as an enabler. ... As organisations embed AI deeper into their operations, it’s time to ask the tough questions around what kind of data we’re feeding into AI, who has access to AI outputs, and if there’s a breach – what processes we have in place to respond quickly and meet GDPR’s reporting timelines. Despite the urgency, a glaring number of organisations still don’t have a formal AI policy in place, which exposes them to privacy and compliance risks that could have serious consequences, especially when data loss prevention is a top priority for businesses.


CISOs, Boards, CIOs: Not dancing Tango. But Boxing.

CISOs overestimate alignment on core responsibilities like budgeting and strategic cybersecurity goals, while boards demand clearer ties to business outcomes. Another area of tension is around compliance and risk. Boards tend to view regulatory compliance as a critical metric for CISO performance, whereas most security leaders view it as low impact compared to security posture and risk mitigation. ... security is increasingly viewed as a driver of digital trust, operational resilience, and shareholder value. Boards are expecting CISOs to play a key role in revenue protection and risk-informed innovation, especially in sectors like financial services, where cyber risk directly impacts customer confidence and market reputation. In India’s fast-growing digital economy, this shift empowers security leaders to influence not just infrastructure decisions, but the strategic direction of how businesses build, scale, and protect their digital assets. Direct CEO engagement is making cybersecurity more central to business strategy, investment, and growth. ... When it comes to these complex cybersecurity subjects, the alignment between CXOs and CISOs is uneven and still maturing. Our findings show that while 53 per cent of CISOs believe AI gives attackers an advantage (down from 70 per cent in 2023), boards are yet to fully grasp the urgency. 


Order Out of Chaos – Using Chaos Theory Encryption to Protect OT and IoT

It turns out, however, that chaos is not ultimately and entirely unpredictable because of a property known as synchronization. Synchronization in chaos is complex, but ultimately it means that despite their inherent unpredictability two outcomes can become coordinated under certain conditions. In effect, chaos outcomes are unpredictable but bounded by the rules of synchronization. Chaos synchronization has conceptual overlaps with Carl Jung’s work, Synchronicity: An Acausal Connecting Principle. Jung applied this principle to ‘coincidences’, suggesting some force transcends chance under certain conditions. In chaos theory, synchronization aligns outcomes under certain conditions. ... There are three important effects: data goes in and random chaotic noise comes out; the feed is direct RTL; there is no separate encryption key required. The unpredictable (and therefore effectively, if not quite scientifically) unbreakable chaotic noise is transmitted over the public network to its destination. All of this is done in hardware – so, without physical access to the device, there is no opportunity for adversarial interference. Decryption involves a destination receiver running the encrypted message through the same parameters and initial conditions, and using the chaos synchronization property to extract the original message.
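The core idea — that a receiver with the same parameters and initial conditions can regenerate the "chaotic noise" and recover the message — can be sketched in software with a logistic map. The real scheme described here runs in hardware with synchronized chaotic circuits; this toy version only illustrates the principle, and all the constants are illustrative assumptions.

```python
# Toy chaos-based masking with the logistic map. Sender and receiver
# share the map parameter and initial condition, which play the role
# of a key: both sides regenerate the identical "chaotic" byte stream.

R = 3.99        # logistic-map parameter in the chaotic regime
X0 = 0.123456   # shared initial condition

def chaotic_stream(n, r=R, x0=X0):
    """Generate n bytes by iterating x -> r * x * (1 - x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def mask(data: bytes) -> bytes:
    # XOR with the chaotic stream; applying it twice restores the data.
    return bytes(d ^ k for d, k in zip(data, chaotic_stream(len(data))))

msg = b"OT sensor reading: 42"
cipher = mask(msg)    # looks like noise on the wire
plain = mask(cipher)  # receiver regenerates the same stream
```

Note this sketch is not secure cryptography — software logistic maps are well-studied and attackable; it only demonstrates how synchronized chaotic generators let the receiver strip the noise without a separately transmitted key.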


5 ways to ensure your team gets the credit it deserves, according to business leaders

Chris Kronenthal, president and CTO at FreedomPay, said giving credit to the right people means business leaders must create an environment where they can judge employee contributions qualitatively and quantitatively. "We'll have high performers and people who aren't doing so well," he said. "It's important to force your managers to review everyone objectively. And if they can't, you're doing the entire team a disservice because people won't understand what constitutes success." ... "Anyone shying away from measurement is not set up for success," he said. "A good performer should want to be measured because they're comfortable with how hard they're working." He said quantitative measures can be used to prompt qualitative debates about whether, for example, underperformers need more training. ... Stephen Mason, advanced digital technologies manager for global industrial operations at Jaguar Land Rover, said he relies on his talented IT professionals to support the business strategy he puts in place. "I understand the vision that the technology can help deliver," he said. "So there isn't any focus on 'I' or 'me.' Every session is focused on getting the team together and giving the right people the platform to talk effectively." Mason told ZDNET that successful managers lean on experts and allow them to excel.

Daily Tech Digest - November 09, 2023

MIT Physicists Transform Pencil Lead Into Electronic “Gold”

MIT physicists have metaphorically turned graphite, or pencil lead, into gold by isolating five ultrathin flakes stacked in a specific order. The resulting material can then be tuned to exhibit three important properties never before seen in natural graphite. ... “We found that the material could be insulating, magnetic, or topological,” Ju says. The latter is somewhat related to both conductors and insulators. Essentially, Ju explains, a topological material allows the unimpeded movement of electrons around the edges of a material, but not through the middle. The electrons are traveling in one direction along a “highway” at the edge of the material separated by a median that makes up the center of the material. So the edge of a topological material is a perfect conductor, while the center is an insulator. “Our work establishes rhombohedral stacked multilayer graphene as a highly tunable platform to study these new possibilities of strongly correlated and topological physics,” Ju and his coauthors conclude in Nature Nanotechnology.


Conscientious Computing – Facing into Big Tech Challenges

The tech industry has driven incredibly rapid innovation by taking advantage of increasingly cheap and more powerful computing – but at what unintended cost? What collateral damage has been created in our era of “move fast and break things”? Sadly, it’s now becoming apparent we have overlooked the broader impacts of our technological solutions. As software proliferates through every facet of life and the scale of it increases, we need to think more about where this leads us from people, planet and financial perspectives. ... The classic Scope, Cost, Time pyramid – but often it’s the observable functional quality that is prioritised. For that I’ll use a somewhat surreal version of an iceberg – as so much of technical debt (and, effectively, sustainability debt – a topic for a future blog) is hidden below the water line. Every engineering decision (or indecision) has ethical and sustainability consequences, often invisible from within our isolated bubbles. Just as the industry has had to raise its game on topics such as security, privacy and compliance, we desperately need to raise our game holistically on sustainability.


The CIO’s fatal flaw: Too much leadership, not enough management

So why does leadership get all the buzz? A cynic might suggest that the more respect doing-the-work gets, the more the company might have to pay the people who do that work, which in turn would mean those who manage the work would get paid more than those who think and charismatically express deep and inspirational thoughts. And as there are more people who do work than those who manage it, respecting the work and those who do it would be expensive. Don’t misunderstand. Done properly, leading is a lot of work, and because leading is about people, not processes or tools and technology, it’s time-consuming, too. And in fact, when I conduct leadership seminars, the biggest barrier to success for most participants is figuring out and committing to their time budget. Leadership, that is, involves setting direction, making or facilitating decisions, staffing, delegating, motivating, overseeing team dynamics, engineering the business culture, and communicating. Leaders who are committed to improving at their trade must figure out how much time they plan to devote to each of these eight tasks, which is hard enough.


The Next IT Challenge Is All about Speed and Self-Service

One of the most significant roadblocks to rapid cloud adoption is sheer complexity. Provisioning a cloud environment involves dozens of dependent services, intricate configurations, security policies and data governance issues. The cognitive load on IT teams is significant, and the situation is exacerbated by manual processes that are still in place. The vast majority of engineering teams still depend on legacy ticketing systems to request cloud environments from IT, which adds a significant load on IT and also slows engineering teams. This slows down the entire operation, making it difficult for IT and engineering to support business needs effectively. In fact, in one study conducted by Rafay Systems, application developers at enterprises revealed that 25% of organizations reportedly take three months or longer to deploy a modern application or service after its code is complete. The real goal for any IT department is to support the needs of the business. Today, they do that better, faster and more cost-effectively by leveraging cloud technologies to realize all the business benefits of the modern applications being deployed.


The DPDP Act: Bolstering data protection & privacy, making India future-ready

The DPDP Act has a direct impact across industries. Organisations not only need to reassess their existing compliance status and gear up to cope with the new norms but also create a phased action plan for various processes. Moreover, if labeled as SDF, organisations also need to appoint a Data Protection Officer (DPO). In addition, organisations need to devise an appropriate data protection and privacy policy framework in alignment with the DPDP Act. Further, consent forms and mechanisms have to be developed to ensure standard procedures as laid out in the legislation. Companies have to additionally invest to adopt the necessary changes in compliance with the law. They need to list down their third-party data handlers, consent types and processes, privacy notices, contract clauses, categorise data, and develop breach management processes. Sharing his perspective on the DPDP Act, Amit Jaju, Senior Managing Director, Ankura Consulting Group (India) says, “The Digital Personal Data Protection Act 2023 has ushered in a new era of data privacy and protection, compelling solution providers to realign their business strategies with its mandates.


Will AI hurt or help workers? It's complicated

Here's what is certain: CIOs see AI as being useful, but not replacing higher-level workers. JetRockets recently surveyed US CIOs. In its report, How Generative AI is Impacting IT Leaders & Organizations, the custom-software firm found that CIOs are primarily using AI for cybersecurity and threat detection (81%), with predictive maintenance and equipment monitoring (69%) and software development / product development (68%) in second and third place, respectively. Security, you ask? Yes, security. CrowdStrike, a security company, sees a huge demand building for AI-based security virtual assistants. A Gartner study on virtual assistants predicted, "By 2024, 40% of advanced virtual assistants will be industry-domain-specific; by 2025, advanced virtual assistants will provide advisory and intervention roles for 30% of knowledge workers, up from 5% in 2021." By CrowdStrike's reckoning, AI will "help organizations scale their cybersecurity workforce by three times and reduce operating costs by close to half a million dollars." That's serious cash.


From Chaos to Confidence: The Indispensable Role of Security Architecture

Beyond mere firefighting, security architecture embraces the proactive art of strategic defense. It takes a risk-based approach to identifying potential threats, assessing weak points in an organization's IT stack, architecting forward-looking designs and prioritizing security initiatives. By aligning security investments with the organization's risk tolerance and business priorities, security architecture ensures that precious resources are optimally allocated for maximum security defense designed with in-depth zero trust security principles in mind. This reduces enterprise application deployment and operational security costs. It is similar to designing high-rise buildings in a standard manner, following all safety codes and by-laws while still allowing individual apartment owners to design and create their homes as they would prefer. Cyberattacks have become increasingly sophisticated and frequent. As a result, it is imperative for defense systems to have comprehensive, purpose-built architectures and designs in place to protect against such threats. Security architecture provides a complete defense framework by integrating various security components


Top 5 IT disaster scenarios DR teams must test

Failed backups are some of the most frequent IT disasters. Businesses can replace hardware and software, but if the data and all backups are gone, bringing them back might be impossible or incredibly expensive. Sys admins must periodically test their ability to restore from backups to ensure backups are working correctly and the restore process does not have some unseen fatal flaw. At the same time, there should always be multiple generations of backups, with some of those backup sets off site. ... Hardware failure can take many forms, including a system not using RAID, a single disk loss taking down a whole system, faulty network switches and power supply failures. Most hardware-based IT disaster scenarios can be mitigated with relative ease, but at the cost of added complexity and a price tag. One example is a database server. Such a server can be turned into a database cluster with highly available storage and networking. The cost for doing this would easily double the cost of a single nonredundant server. Administrators would also have to undergo training to manage such an environment.
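The restore-testing advice above can be automated in a few lines: perform a backup, restore it to a scratch location, and verify the restored copy's checksum matches the original. This is only a minimal sketch of the pattern; the file names and paths are illustrative, and a real DR test would exercise the actual backup tooling rather than file copies.

```python
# Minimal sketch of an automated restore test: back up a file, restore
# it elsewhere, and verify the restored data byte-for-byte via SHA-256.
# Paths and names are illustrative placeholders.

import hashlib
import os
import shutil
import tempfile

def sha256(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

workdir = tempfile.mkdtemp()
source = os.path.join(workdir, "data.db")
backup = os.path.join(workdir, "data.db.bak")
restored = os.path.join(workdir, "restored.db")

with open(source, "wb") as f:
    f.write(b"critical records\n" * 1000)

shutil.copy2(source, backup)     # stand-in for the backup job
shutil.copy2(backup, restored)   # stand-in for the restore under test

ok = sha256(source) == sha256(restored)
print("restore verified:", ok)
```

Running a check like this on a schedule — against real backup sets, including the off-site generations — is what catches the "unseen fatal flaw" in a restore process before a disaster does.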


Mastering AI Quality: Strategies for CDOs and Tech Leaders

Most chief data officers (CDOs) work hard to make their data operations into “glass boxes” -- transparent, explainable, explorable, trustworthy resources for their companies. Then comes artificial intelligence and machine learning (AI/ML), with their allure of using that data for ever-more impressive strategic leaps, efficiencies, and growth potential. However, there’s a problem. Nearly all AI/ML tools are “black boxes.” They are so inscrutable even their creators are concerned about how they produce their results. The speed and depth at which these tools can process data without human intervention or input presents a danger to technology leaders seeking control of their data and who want to ensure and verify the quality of analytics that use it. Combine this with a push to remove humans from the decision loop and you have a potent recipe for decisions to go off the rails. ... With a human collaborator or a human-designed algorithm, it is generally easy to elicit a meaningful response to the question, “Why is this result what it is?” With AI -- and generative AI in particular -- that may not be the case.


Revamping IT for AI System Support

“It’s important for everybody to understand how fast this [AI] is going to change,” said Eric Schmidt, former CEO and chairman of Google. “The negatives are quite profound.” Among the concerns is that AI firms still had “no solutions for issues around algorithmic bias or attribution, or for copyright disputes now in litigation over the use of writing, books, images, film, and artworks in AI model training. Many other as yet unforeseen legal, ethical, and cultural questions are expected to arise across all kinds of military, medical, educational, and manufacturing uses.” The challenge for companies and for IT is that the law always lags technology. There will be few hard and fast rules for AI as it advances relentlessly. So, AI runs the risk of running off ethical and legal guardrails. In this environment, legal cases are likely to arise that define case law and how AI issues will be addressed. The danger for IT and companies is that they don’t want to become the defining cases for the law by getting sued. CIOs can take action by raising awareness of AI as a corporate risk management concern to their boards and CEOs.



Quote for the day:

"Holding on to the unchangeable past is a waste of energy and serves no purpose in creating a better future." -- Unknown

Daily Tech Digest - February 16, 2022

Metaverse: Making it a universe for all

With a lot of speculation and little clarity about how it will work, technology companies and governments are only starting to invest in the concept. However, these investments and innovations continue to be riddled with the same concerns that various social scientists and philosophers have been asking of the promises made by the internet and social media. Set off to “democratise the good and disrupt the bad”, the internet has actively helped in the creation of international monopolies holding powers more than governments of nations. Although it has brought immense information to our fingertips, gatekeepers of knowledge still continue to profit by encouraging exclusion, our social relationships have taken a back-seat as we become increasingly absorbed in our identities online, and many vulnerable groups are left behind due to infrastructural inaccessibility to phones, laptops, computers, and the internet. As the world prepares to step into the metaverse in the next decade, these same questions come to the fore.


How to manage software developers without micromanaging

If software developers detest micromanaging, many have a stronger contempt for yearly performance reviews. Developers target real-time performance objectives and aim to improve velocity, code deployment frequency, cycle times, and other key performance indicators. Scrum teams discuss their performance at the end of every sprint, so the feedback from yearly and quarterly performance reviews can seem superfluous or irrelevant. But there’s also the practical reality that organizations require methods to recognize whether agile teams and software developers meet or exceed performance, development, and business objectives. How can managers get what they need without making developers miserable? What follows are seven recommended practices that align with principles in agile, scrum, devops, and the software development lifecycle and that could be applied to reviewing software developers. I don’t write them as SMART goals, but leaders should adopt the relevant ones as such based on the organization’s agile ways of working and business objectives.


Neuralink: Elon Musk's brain implant firm refutes animal abuse claims

The complaint centers on the care provided to test monkeys during and after implant and removal procedures at UC Davis. PCRM alleges that Neuralink and UC Davis staff failed to provide monkeys with adequate veterinary care, used an unapproved substance called BioGlue that killed monkeys in the experiments, and euthanized several monkeys. Details of the monkeys' conditions were revealed in documents released by the university after PCRM filed a public records lawsuit in 2021. Neuralink says that during the 2.5 years at UC Davis, its tests were only conducted on cadavers or "terminal procedures", which involved the "humane euthanasia of an anesthetized animal at the completion of the surgery." "The initial work from these procedures allowed us to develop our novel surgical and robot procedures, establishing safer protocols for subsequent survival surgeries," the company says. During survival studies, Neuralink says two animals were euthanized at planned dates and six animals were euthanized at the medical advice of UC Davis veterinary staff.


Creating a more sustainable IT department

Technical debt requires more infrastructure, which results in more and more carbon emissions. In addition, technical debt requires more manual processes. For example, if we have three different ERP systems that are not integrated, it takes more manual effort just to extract the reports for financial reporting, which results in more paperwork. Technical debt accumulation results in much more latency in processes, much more manual effort, much more infrastructure -- and all that adds to our carbon footprint. Because of that we in the technology group also have an eye toward not accumulating net new technical debt. We do that, in part, by looking at how we manage our vendors so we don't end up buying similar products [for different areas of the business] as well as how we introduce new technologies into our ecosystem to avoid duplication and to ensure they can scale across the organization. ... Continued hybrid and flexible working options for our employees also helps us support reduced emissions because employees don't have to commute. We have also implemented our own business systems management platform that facilitates hot-desking in the workplace.


How Dutch hackers are working to make the internet safe

However happy the foundation was with the donation, it did lead to a slight panic. “It was just before the turn of the year, the term ‘wealth tax’ came up, so we hastily set up a fund and found an administration office to handle the annual accounts,” says Van ’t Hof. This accelerated the professionalisation of the foundation. A new structure was set up, with DIVD as the fundamental institute. “Victor Gevers was the chairman of that before, but because we wanted a different structure, that stopped and we had to look for a director. Surprisingly, everyone pointed in my direction. I took on that task,” says Van ’t Hof. Under the flag of the DIVD Institute is the fund that is meant to bundle all subsidies, donations and other money flows. “From that fund, we can finance projects that contribute to a safer internet,” explains the DIVD director. To give shape to the global ambition, a separate foundation was also set up, CSIRT.global, of which Eward Driehuis is in charge. “That foundation will set up departments in other countries so that volunteer hackers there can also help to scan and report,” says Van ’t Hof.


How to Run a Cassandra Operation in Docker

Containers thrive in a world of modern applications that demand faster delivery, better portability and seamless scalability. Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production. The driving force behind this growing trend is none other than Docker. Docker is an open source containerization platform that lets developers package applications into containers that include everything they need to run in different environments. For enterprises, however, it can be tricky to manage individual Docker containers at scale, which has given rise to the popular container orchestration platform Kubernetes (K8s). In short, Kubernetes makes it easy to deploy, manage and scale containers — and is the dominant orchestration platform used in enterprises today. This makes learning Kubernetes a must for every budding application developer, but first you need to understand containers and Docker. In this Cassandra Operations in Docker workshop, you’ll become familiar with Docker and learn how to deploy a cloud native application in containers.
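To make the starting point concrete, a single Cassandra node can be brought up from the official Docker Hub image with two commands (the container name and image tag below are illustrative, not taken from the workshop):

```shell
# Start a single-node Cassandra cluster from the official image,
# publishing the CQL native-transport port (9042) to the host.
docker run --name cassandra-node -d -p 9042:9042 cassandra:4.0

# Once the node reports ready, open a CQL shell inside the container.
docker exec -it cassandra-node cqlsh
```

From there, orchestrators like Kubernetes take over the job of running many such containers as a coordinated cluster.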


MIT Develops New Programming Language for High-Performance Computers

The ATL project combines two of the main research interests of Ragan-Kelley and Chlipala. Ragan-Kelley has long been concerned with the optimization of algorithms in the context of high-performance computing. Chlipala, meanwhile, has focused more on the formal (as in mathematically-based) verification of algorithmic optimizations. This represents their first collaboration. Bernstein and Liu were brought into the enterprise last year, and ATL is the result. It now stands as the first, and so far the only, tensor language with formally verified optimizations. Liu cautions, however, that ATL is still just a prototype — albeit a promising one — that’s been tested on a number of small programs. “One of our main goals, looking ahead, is to improve the scalability of ATL, so that it can be used for the larger programs we see in the real world,” she says. In the past, optimizations of these programs have typically been done by hand, on a much more ad hoc basis, which often involves trial and error, and sometimes a good deal of error. 


Goal-Driven Kanban

Goal-driven Kanban was first adopted in a single team. Initially, the team used Scrum. Due to the nature of the business, a significant percentage of the team's tasks depended on other teams and various stakeholders, and so the team was consistently unable to complete them on time. Naturally, this caused frustration, and the team decided to switch to Kanban. This cured the issue, but over time the team members started to feel that they were working in a “feature factory”. They were missing challenges. Thus Goal-Driven Kanban was born. After receiving management support and reaching agreement with Product Management, the team chose their first goal. It immediately revealed the need to re-plan other features and tasks, since the team had to re-focus on the agreed goal. It required a rough estimation of the goal, an understanding of the team's capacity, and further agreements with stakeholders. While working on the goal, the team had to tackle various challenges, because the bar was high and the whole team had to work together on the overall design and development.


A product manager’s guide to web3

Many PMs develop skills like “communication” and “influence” at larger organizations, or even startups where they need to work closely with founders and rally overworked teams. This makes sense because persuasion and coordination have been core to the web2 PM job. Those skills don’t matter as much here. Web3 PM is more focused on execution and community—like signing a big new protocol partner or getting tons of anon users via Twitter. In web2 I was afraid to tweet much for fear of professional consequences. Now I’d be untrustworthy if I didn’t tweet a lot. Making a viral meme is more important than writing a good email. That is because getting positive attention in the frenetic world of web3 is more valuable than “alignment.” ... Web3 moves too quickly for pontification; new protocols launch daily and DeFi concepts like bonding curves and OHM forks are being tested in real time, so visions and strategies quickly become outdated. This may change over time as the space matures and product vision becomes more of a competitive advantage.


NIST releases software, IoT, and consumer cybersecurity labeling guidance

The order asked NIST to produce guidance for federal agency staff who have software procurement-related responsibilities and is intended to help federal agency staff know what information to request from software producers regarding their secure software development practices. The new NIST document spells out minimum recommendations for federal agencies to follow as they acquire software or a product containing software. The order also directed NIST to define actions or outcomes for software producers, such as commercial-off-the-shelf (COTS) product vendors, government-off-the-shelf software developers, contractors, and other custom software developers. ... NIST notes that its guidance is limited to federal agency procurement of software, which includes firmware, operating systems, applications, and application services, as well as products containing software. Software developed by federal agencies is out of scope, as is open-source software freely and directly obtained by federal agencies.



Quote for the day:

"Leadership is liberating people to do what is required of them in the most effective and humane way possible." -- Max DePree

Daily Tech Digest - January 03, 2022

Get the most value from your data with data lakehouse architecture

A data lakehouse is essentially the next breed of cloud data lake and warehousing architecture that combines the best of both worlds. It is an architectural approach for managing all data formats (structured, semi-structured, or unstructured) as well as supporting multiple data workloads (data warehouse, BI, AI/ML, and streaming). Data lakehouses are underpinned by a new open system architecture that allows data teams to implement data structures through smart data management features similar to data warehouses over a low-cost storage platform that is similar to the ones used in data lakes. ... A data lakehouse architecture allows data teams to glean insights faster as they have the opportunity to harness data without accessing multiple systems. A data lakehouse architecture can also help companies ensure that data teams have the most accurate and updated data at their disposal for mission-critical machine learning, enterprise analytics initiatives, and reporting purposes. There are several reasons to look at modern data lakehouse architecture in order to drive sustainable data management practices.


A CISO’s guide to discussing cybersecurity with the board

When you get a chance to speak with executives, you typically don’t have much time to discuss details. And frankly, that’s not what executives are looking for, anyway. It’s important to phrase cybersecurity conversations in a way that resonates with the leaders. Messaging starts with understanding the C-suite and boards’ priorities. Usually, they are interested in big picture initiatives, so explain why cyber investment is critical to the success of these initiatives. For example, if the CEO wants to increase total revenue by 5% in the next year, explain how they can prevent major unnecessary losses from a cyber attack with an investment in cybersecurity. Once you know the executive team and board’s goals, look to specific members, and identify a potential ally. Has one team recently had a workplace security breach? Does one leader have a difficult time getting his or her team to understand the makings of a phishing scheme? These interests and experiences can help guide the explanation of the security solution. If you’re a CISO, you’re well-versed in cybersecurity, but remember that not everyone is as involved in the subject as you are, and business leaders probably will not understand technical jargon.


Best of 2021 – Containers vs. Bare Metal, VMs and Serverless for DevOps

A bare metal machine is a dedicated server using dedicated hardware. Data centers have many bare metal servers that are racked and stacked in clusters, all interconnected through switches and routers. Human and automated users of a data center access the machines through access servers, high security firewalls and load balancers. The virtual machine introduced an operating system simulation layer between the bare metal server’s operating system and the application, so one bare metal server can support more than one application stack with a variety of operating systems. This provides a layer of abstraction that allows the servers in a data center to be software-configured and repurposed on demand. In this way, a virtual machine can be scaled horizontally, by configuring multiple parallel machines, or vertically, by configuring machines to allocate more power to a virtual machine. One of the problems with virtual machines is that the virtual operating system simulation layer is quite “thick,” and loading and configuring each VM typically takes considerable time. In a DevOps environment, changes occur frequently.


Desktop High-Performance Computing

Many engineering teams rely on desktop products that only run on Microsoft Windows. Desktop engineering tools that perform tasks such as optical ray tracing, genome sequencing, or computational fluid dynamics often couple graphical user interfaces with complex algorithms that can take many hours to run on traditional workstations, even when powerful CPUs and large amounts of RAM are available. Until recently, there has been no convenient way to scale complex desktop computational engineering workloads seamlessly to the cloud. Fortunately, the advent of AWS Cloud Development Kit (CDK), AWS Elastic Container Service (ECS), and Docker finally make it easy to scale desktop engineering workloads written in C# and other languages to the cloud. ... The desktop component first builds and packages a Docker image that can perform the engineering workload (factor an integer). AWS CDK, executing on the desktop, deploys the Docker image to AWS and stands up cloud infrastructure consisting of input/output worker queues and a serverless ECS Fargate cluster.
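The queue-driven pattern described above can be sketched without any AWS dependencies. The Python sketch below is a local stand-in: in-process queues take the place of the SQS input/output worker queues, and a plain function takes the place of the containerized Fargate task. Only the trial-division workload (“factor an integer”) comes from the article; every name is illustrative.

```python
import queue

def prime_factors(n: int) -> list[int]:
    """Trial-division factorization: the toy 'engineering workload'."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def run_worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """Drain the input queue, run the workload on each item, publish results."""
    while not inbox.empty():
        n = inbox.get()
        outbox.put((n, prime_factors(n)))

# Local stand-ins for the cloud input/output queues the CDK stack provisions.
inbox, outbox = queue.Queue(), queue.Queue()
for n in (84, 97, 360):
    inbox.put(n)
run_worker(inbox, outbox)
while not outbox.empty():
    print(outbox.get())
```

In the real deployment, many such workers run in parallel as Fargate tasks, each pulling from the shared input queue, which is what lets the desktop workload scale out.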


Micromanagement is not the answer

Neuroscience also reveals why micromanaging is counterproductive. Donna Volpitta, an expert in “brain-based mental health literacy,” explained to me that the two most fundamental needs of the human brain are security and autonomy, both of which are built on trust. Leaders who instill a sense of trust in their employees foster that sense of security and autonomy and, in turn, loyalty. When leaders micromanage their employees, they undermine that sense of trust, which tends to breed evasion behaviors in employees. It’s a natural brain response. “Our brains have two basic operating modes—short-term and long-term,” Volpitta says. “Short-term is about survival. It’s the freeze-flight-fight response or, as I call it, the ‘grasshopper’ brain that is jumping all over. Long-term thinking considers consequences [and] relationships, and is necessary for complex problem solving. It’s the ‘ant’ brain, slower and steadier.” She says micromanagement constantly triggers short-term, survival thinking detrimental to both social interactions and task completion.


Unblocking the bottlenecks: 2022 predictions for AI and computer vision

One of the key challenges of deep learning is the need for huge amounts of annotated data to train large neural networks. While this is the conventional way to train computer vision models, the latest generation of technology providers are taking an innovative approach that enables machine learning with comparatively less training data. This includes moving away from supervised learning to self-supervised and weakly supervised learning, where data availability is less of an issue. This approach, also known as few-shot learning, detects objects as well as new concepts with considerably less input data. In many cases the algorithm can be trained with as little as 20 images. ... Privacy remains a major concern in the AI sector. In most cases, a business must share its data assets with the AI provider via third party servers or platforms when training computer vision models. Under such arrangements there is always the risk that the third party could be hacked or even exploit valuable metadata for its own projects. As a result, we’re seeing the rise of Privacy Enhancing Computation, which enables data to be shared between different ecosystems in order to create value, while maintaining data confidentiality.


How Automation Can Solve the Security Talent Shortage Puzzle

Supporting remote and hybrid work requires organizations to invest in and implement new technologies that facilitate needs such as remote access, secure file-sharing, real-time collaboration and videoconferencing. Businesses must also hire professionals to configure, implement and maintain these tools with an eye towards security – a primary concern here, as businesses of all sizes now live or die by the availability and integrity of their data. The increasing complexity of IT environments – many of which are now pressured to support bring-your-own-device (BYOD) policies – has only intensified the need for competent cybersecurity talent. It’s not surprising that the ongoing shortage of trained professionals makes it difficult for organizations to expand their business and adopt new technologies. Almost half a million cybersecurity jobs remain open in the U.S. alone, forcing businesses to compete aggressively to fill these roles. Yet, economic pressures make it particularly difficult for small- to mid-sized businesses (SMBs) to play this game. Most cannot hope to match the high salaries that large enterprises offer.


HIPAA Privacy and Security: At a Crossroads in 2022?

The likely expansion of OCR’s mission into protecting the confidentiality of SUD data comes as actions to enforce the HITECH Breach Notification and HIPAA Security Rule appear to be at a standstill. According to the data compiled by OCR, in 2021 there were more than 660 breaches of the unauthorized disclosure of unsecured PHI reported by HIPAA-covered entities and their business associates that compromised the health information of over 40 million people. A significant number of the breaches reported to OCR appear to show violations of the HIPAA standards due to late reporting and failure to adequately secure information systems or train workforce members on safeguarding PHI. In 2021, OCR announced settlements in two enforcement actions involving compliance with the HIPAA Security Rule standards. OCR has been mum on its approach to enforcement of the HIPAA breach and security rules. One explanation could be the impact being felt by the 5th Circuit Court of Appeals decision overturning an enforcement action against the University of Texas MD Anderson Cancer Center.


Will we see GPT-3 moment for computer vision?

It is truly the age of large models. Each of these models is bigger and more advanced than the previous one. Take, for example, GPT-3 – when it was introduced in 2020, it was the largest language model, trained on 175 billion parameters. Fast forward one year, and we already have the GLaM model, a trillion-weight model. Transformer models like GPT-3 and GLaM are transforming natural language processing. There are active conversations about whether these models will make job roles like writer and even programmer obsolete. While these can be dismissed as speculations, for now, one cannot deny that these large language models have truly transformed the field of NLP. Could this innovation be extended to other fields – like computer vision? Can we have a GPT-3 moment for computer vision? OpenAI recently released GLIDE, a text-to-image generator, where the researchers applied guided diffusion to the problem of text conditional image synthesis. For GLIDE, the researchers trained a 3.5 billion parameter diffusion model that uses a text encoder. Next, they compared CLIP (Contrastive Language-Image Pre-training) guidance and classifier free guidance.


What is Legacy Modernization

To achieve a good level of agility, the systems supporting the organization must also react quickly to changes in the surrounding environment. Legacy systems place a constraint on agility since they are often difficult to change or provide inefficient support to business activities. This is not unusual: at the time of the system design there were perhaps technology constraints that no longer exist, or the system was designed for a particular way of working that is no longer relevant. Legacy Modernization changes or replaces legacy systems, making the organization more efficient and cost-effective. Not only can this optimize existing business processes, but it can open new business opportunities. Security is an important driver for Legacy Modernization. Cyber-attacks on organizations are common and become more sophisticated over time. The security of a system degrades over time, and legacy systems may no longer have the support or the technologies required to deter modern attack methods, making them an easy target for hackers. This represents a significant business risk to an organization.



Quote for the day:

"Leadership is particularly necessary to ensure ready acceptance of the unfamiliar and that which is contrary to tradition." -- Cyril Falls

Daily Tech Digest - September 04, 2021

AMD files teleportation patent to supercharge quantum computing

AMD has proposed a patent for 'teleportation,' meaning things could be about to get much more efficient around here. With the incredible technological feats humanity achieves on a daily basis, and Nvidia's Jensen going off on one last year about GeForce holodecks and time machines, it's easy for us to slip into a headspace that lets us believe genuine human teleportation is just around the corner. "Finally," you sigh, mouthing the headline to yourself. "Goodbye work commute, hello popping to Japan for an authentic Ramen on my lunch break." ... Essentially, the 'out-of-order' execution method AMD is looking to lay claim to ensures some Qubits that would be left idle—waiting for their calculation step to come around—are able to execute independent of a prior result. Where usually they would need to wait for previous Qubits to provide instructions, they can calculate simultaneously, no need to wait in line. So, no, we're not going to be zipping through wormholes just yet. But if AMD's designs come through, we could be looking at much more efficient, scalable and stable quantum computing architecture than we have now.


The Internet of Things Requires a Connected Data Infrastructure

Not long ago, a terabyte of information was an enormous amount and might be the foundation for solid decision-making. These days, it won’t cut it. For example, looking at a terabyte of data might yield a decision that’s 70% accurate. But leaving 30% to chance is unacceptable when it comes to real-time vehicle safety. On the other hand, having the ability to ingest and process 40 terabytes — from all sources, edge to core — can result in an accuracy rate well exceeding 90%. Something jumps in front of your car — is it a person, a dog, a trash bag, a child’s ball? Real-time systems need to determine the level of risk and react within milliseconds. Real-time processing has to be done closer to where the decisions are being made. In terms of IoT, a lot of questions can be answered by using a digital twin. These create additional layers of insights and provide a better understanding of what’s happening in any given situation and decide on the most appropriate course of immediate action. Digital twins take insight not just from the raw sensors — the edge compute nodes — but a combination of real-time data at the edge and historical data at the core.


Can Your Organization Benefit from Edge Data Centers?

Organizations considering a move to edge computing should begin their journey by inventorying their applications and infrastructure. It's also a good idea to assess current and future user requirements, focusing on where data is created and what actions need to be performed on that data. "Generally speaking, the more susceptible data is to latency, bandwidth, or security issues, the more likely the business is to benefit from edge capabilities," said Vipin Jain, CTO of edge computing startup Pensando. “Focus on a small number of pilot projects and partner with integrators/ISVs with experience in similar deployments." Fugate recommended examining business functions and processes and linking them to the application and infrastructure services they depend on. "This will ensure that there isn’t one key centralized service that could stop critical business functions," he said. "The idea is to determine what functions must survive regardless of an infrastructure or connectivity failure." Fugate also advised determining how to effectively manage and secure distributed edge platforms.

How to Speed Up Your Digital Transformation

The complexity-in-use is often overlooked in digitalization projects because those in charge think that accounting for task and system complexity independent of one another is enough. In our case, at the beginning of the transformation, tasks and processes were considered relatively stable and independent from the new system. As a result, the loan-editing clerks were unable to complete business-critical tasks for weeks, and management needed to completely reinvent their change management approach to turn the project around and overcome operational problems in the high complexity-in-use area. They brought in more people to reduce the backlog, developed new training materials, and even changed the newly implemented system — a problem-solving technique organizations with smaller budgets wouldn’t find easy to deploy. In the end, our study partner managed this herculean task, but it took them months to get the struggling departments back on track.


Ecosystems at The Edge: Where the Data Center Becomes a Marketplace

Rapidly evolving edge computing architectures are often seen as a way for businesses to enable new applications that require low latency and place computing close to the origin of data. While those are important use cases, what is less often discussed is the opportunity for businesses to leverage the edge to spawn ecosystems that generate new revenue. To realize this value, companies must think of the edge as more than just a collection point for data from intelligent devices. They should broaden their vision to see the edge as a new business hub. These small data centers can evolve into full-fledged service providers that attract local businesses, generate e-commerce transactions and enable interconnections that never touch the central cloud. Edge computing is an expansion of cloud infrastructure that moves data collection, processing and services closer to the point at which data is created or used. It is the fastest-growing segment of the cloud category with the total market expected to expand 37% annually through 2027, according to Grand View Research.


NSA: We 'don't know when or even if' a quantum computer will ever be able to break today's public-key encryption

In the NSA's summary, a CRQC – should one ever exist – "would be capable of undermining the widely deployed public key algorithms used for asymmetric key exchanges and digital signatures" – and what a relief it is that no one has one of these machines yet. The post-quantum encryption industry has long sought to portray quantum computing as an immediate threat to today's encryption, as El Reg detailed in 2019. "The current widely used cryptography and hashing algorithms are based on certain mathematical calculations taking an impractical amount of time to solve," explained Martin Lee, a technical lead at Cisco's Talos infosec arm. "With the advent of quantum computers, we risk that these calculations will become easy to perform, and that our cryptographic software will no longer protect systems." Given that nations and labs are working toward building crypto-busting quantum computers, the NSA said it was working on "quantum-resistant public key" algorithms for private suppliers to the US government to use, having had its Post-Quantum Standardization Effort running since 2016.
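As a hedged illustration of the claim, here is a toy finite-field Diffie-Hellman exchange in Python, with deliberately tiny parameters. Its security rests entirely on the discrete logarithm being impractical to compute at real-world key sizes, which is precisely the assumption a CRQC running Shor's algorithm would undermine.

```python
# Toy finite-field Diffie-Hellman key exchange. Parameters are far too
# small for real-world use; this is illustrative only.
p = 2_147_483_647          # public prime modulus (2**31 - 1, a Mersenne prime)
g = 5                      # public generator

a_secret, b_secret = 1234, 5678   # private keys, never transmitted

A = pow(g, a_secret, p)    # Alice's public value
B = pow(g, b_secret, p)    # Bob's public value

# Each side combines its own secret with the other's public value and
# arrives at the same shared key, g**(a*b) mod p.
shared_a = pow(B, a_secret, p)
shared_b = pow(A, b_secret, p)
assert shared_a == shared_b
print("shared secret agreed")
```

Recovering `a_secret` from the public values `(p, g, A)` is the discrete logarithm problem: infeasible classically at 2048-bit sizes, but efficiently solvable on a sufficiently large quantum computer.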

There are multiple ways that AI could become a detriment to society. Machine learning, a subfield of AI, learns from vast quantities of data and hence carries the risk of perpetuating data bias. AI use cases including facial recognition and predictive analytics could adversely impact protected classes in areas such as loan rejection, criminal justice and racial bias, leading to unfair outcomes for certain people. ... AI is only as good as the data that is used to train it. From an industry perspective, this is problematic given there is often a lack of training data for true failures in critical systems. This becomes dangerous when a wrong prediction leads to potentially life-threatening events such as manufacturing accidents or oil spills. This is why a focus on hybrid AI and “explainable AI” is necessary. ... Unfortunately, cybercriminals have historically been better and faster adopters of technology than the rest of us. AI can become a detriment to society when deepfakes and deep learning models are used as vehicles for social engineering by scammers to steal money, sensitive data and confidential intellectual property by pretending to be people and entities we trust.


Reviewing the Eight Fallacies of Distributed Computing

The challenges of distributed systems, and the broad science around the techniques and mechanisms used to build them, are now well researched. The thing you learn when addressing these challenges in the real world, however, is that academic understanding only gets you so far. Building distributed systems involves engineering pragmatism and trade-offs, and the best solutions are the ones you discover by experience and experiment. ... However, the engineering reality is that multiple kinds of failures can, and will, occur at the same time. The ideal solution now depends on the statistical distribution of failures; or on analysis of error budgets, and the specific service impact of certain errors. The recovery mechanisms can themselves fail due to system unreliability, and the probability of those failures might impact the solution. And of course, you have the dangers of complexity: solutions that are theoretically sound, but complex, might be far more complicated to manage or understand whenever an incident takes place than simpler mechanisms that are theoretically not as complete.
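To ground the point that even simple recovery mechanisms embed statistical assumptions about failures, here is a minimal retry-with-backoff sketch in Python (all names are invented for illustration). Note how the attempt cap, the base delay, and the jittered backoff window are all guesses about the distribution of failures, exactly the kind of trade-off the passage describes.

```python
import random

def call_with_retries(operation, max_attempts=5, base_delay=0.1,
                      sleep=lambda seconds: None):
    """Retry a flaky operation with capped exponential backoff and jitter.

    `sleep` is injectable so the policy can be exercised without waiting.
    """
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; surface the failure
            # Full jitter: wait a random fraction of the growing backoff window.
            sleep(random.uniform(0, base_delay * 2 ** attempt))

# A fake remote call that fails twice before succeeding.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return "ok"

print(call_with_retries(flaky))  # prints "ok" after two retries
```

Even this sketch can itself fail in the ways the article warns about: if the failure is not transient, the retries only add load, and if many clients back off in sync, the jitter is what prevents a retry storm.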


Machine Learning Algorithm Sidesteps the Scientific Method

We might be most familiar with machine learning algorithms as they are used in recommendation engines, and facial recognition and natural language processing applications. In the field of physics, however, machine learning algorithms are typically used to model complex processes like plasma disruptions in magnetic fusion devices, or modeling the dynamic motions of fluids. In the case of this work by the Princeton team, the algorithm skips the interim steps of needing to be explicitly programmed with the conventions of physics. “The algorithms developed are robust against variations of the governing laws of physics because the method does not require any knowledge of the laws of physics other than the fundamental assumption that the governing laws are field theories,” said the team. “When the effects of special relativity or general relativity are important, the algorithms are expected to be valid as well.” The researchers’ approach was inspired in part by Oxford philosopher Nick Bostrom’s philosophical thought experiment that the universe is actually a computer simulation.


What's the Real Difference Between Leadership and Management?

Leaders, like entrepreneurs, are constantly looking for ways to add to their world of expertise. They tend to enjoy reading, researching and connecting with like-minded individuals; they constantly aim to grow. They are usually open-minded and seek opportunities that challenge them to expand their level of thinking, which in turn leads to developing more solutions to problems that may arise. Managers, many times, rely on existing knowledge and skills by repeating proven strategies or behaviors that may have worked in the past to help maintain a steady track record within their field of success with clients. ... Leaders create trust and bonds between their mentees that go beyond expression or definition. Their mentees become raving fanatics willing to go above and beyond the usual scope of supporting their leader in achieving his or her mission. In the long run, the overwhelming support from his or her fanatics helps increase the value and credibility of the leader. On the other hand, managers direct, delegate, enforce and advise either an individual or group that typically represents a brand or organization looking for direction. Followers do as they are told and rarely ask questions. 



Quote for the day:

"Most people don't know how AWESOME they are, until you tell them. Be sure to tell them." -- Kelvin Ringold

Daily Tech Digest - August 05, 2020

Data privacy and data security are not the same

"Data privacy is, in essence, a subset of an organization's data security," Ewing said. "The distinction is important because, although the tools used to maintain data privacy and to ensure data security may overlap, the two are generally addressed differently by different teams using different tools." This overlap can cause confusion, leaving companies that focus just on data security with the false impression that, by default, data privacy is also protected. This is not the case. Unlike data security, which focuses on protecting all of an organization's data from theft or corruption (like during a ransomware attack), data privacy is more granular. To ensure data privacy, organizations must understand, track, and control things like who is authorized to access the data and where the data is stored -- in a Health Insurance Portability and Accountability Act (HIPAA)-compliant cloud, for example. A good example of the differences between data privacy and data security was the harvesting of 87 million Facebook user profiles by the now-defunct political consulting firm Cambridge Analytica during the 2016-17 US presidential election, said Joshua Kail, a communications consultant who ran agency-side PR for Cambridge Analytica until it shut down in May 2018.


State of the Art in Automated Machine Learning

Through the years of development of the machine learning domain, we have seen that a large number of tasks around data manipulation, feature engineering, feature selection, model evaluation, and hyperparameter tuning can be defined as an optimization problem and, with enough computing power, efficiently automated. We can see numerous proofs of that not only in research but also in the software industry as platform offerings or open-source libraries. All these tools use predefined methods for data processing, model training, and evaluation. The creative approach to framing problems and applying new techniques to existing problems is the one that is not likely to be replicated by machine automation, due to a large number of possible permutations, complex context, and expertise the machine lacks. As an example, look at the design of neural net architectures and their applications, a problem where the search space is so ample, where the progress is still mostly human-driven. ... In theory, the entire ML process is computationally hard. From fitting data to, say, a neural network, to hyperparameter selection, to neural architecture search (NAS), these are all hard problems in the general case. However, all of these components have been automated with varying degrees of success for specific problems thanks to a combination of algorithmic advances, computational power, and patience.
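As a minimal sketch of that framing, hyperparameter tuning reduces to black-box optimization: sample candidate configurations, score each with a validation loss, and keep the best. The loss function below is a synthetic stand-in for training and validating a real model; the parameter names and ranges are illustrative.

```python
import random

def validation_loss(lr: float, depth: int) -> float:
    """Synthetic stand-in for training a model and measuring validation loss."""
    return (lr - 0.1) ** 2 + 0.05 * abs(depth - 6)

def random_search(trials: int = 200, seed: int = 0):
    """Hyperparameter tuning treated as black-box optimization."""
    rng = random.Random(seed)
    best_loss, best_params = float("inf"), None
    for _ in range(trials):
        params = {"lr": rng.uniform(0.001, 1.0), "depth": rng.randint(1, 12)}
        loss = validation_loss(**params)
        if loss < best_loss:
            best_loss, best_params = loss, params
    return best_loss, best_params

best_loss, best_params = random_search()
print(best_loss, best_params)
```

AutoML platforms replace the random sampler with smarter strategies (Bayesian optimization, bandits, evolutionary search), but the interface is the same: a black box in, a score out.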


How AI is Becoming Essential to Cyber-Strategy

The problem with machine learning is that the AI is limited to the features it has been taught to expect. Fooling a machine learning security system is as simple as adding an unexpected, unprogrammed feature into the exploit. Imagine a card trick such as “find the lady,” where the machine learning software expects the dealer to operate inside the given parameters (the dealer is only moving around these three cards), but the dealer is cheating by having a fourth card. Because the concept of the fourth card is outside the expected features, the program can be defeated. What artificial neural networks can do is allow an AI to self-determine which features it uses to reach a conclusion. An artificial neural network still requires some degree of human input to confirm whether a conclusion is incorrect, but it effectively self-organizes how it reviews and manages the data it has access to. As an example, an AI looking for new types of viruses can sense everything happening in a computer and then identify whether a program, or even an activity in memory, is doing something unwelcome. It does not need to have seen the behavior before; it only has to recognize the outcome, or potential outcome.
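The idea of recognizing an unwelcome outcome without having seen the behavior before can be illustrated with a deliberately simple stand-in. A real system would use a neural network over many behavioral features; here a z-score over one metric (bytes written per minute) shows the principle of flagging deviation from a learned baseline rather than matching known signatures. All numbers and names are illustrative:

```python
import statistics

def build_baseline(samples):
    # Learn what "normal" looks like from observed activity.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    # Flag activity that deviates sharply from the baseline -- no
    # prior signature of the specific malware is required.
    mean, stdev = baseline
    return abs(value - mean) / stdev > threshold

# Baseline: bytes-written-per-minute samples from typical processes.
normal_writes = [120, 130, 115, 125, 118, 122, 128, 119]
baseline = build_baseline(normal_writes)

print(is_anomalous(124, baseline))     # ordinary activity -> False
print(is_anomalous(50_000, baseline))  # mass file rewriting -> True
```

The second call trips the detector because the *outcome* (an extreme write rate, as in mass encryption) is abnormal -- the detector never needed a sample of the malware itself.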


ICML 2020 highlights: A Transformer-based RL agent, causal ML for increased privacy, and more

Microsoft researchers are in full summer swing when it comes to advancing machine learning in accessibility, privacy, healthcare, and other areas. As Microsoft Partner Research Manager and ICML President John Langford puts it, “ICML is a very broad conference, so its specialty is in some sense ‘all of the above.’” But Langford goes on to add that one of the topics that ICML has a long track record on is currently trending: reinforcement learning. A brief glance through the sessions and workshops presented by Microsoft researchers shows the wide influence reinforcement learning has in our world today, from natural language to robotics to infrastructure considerations like transportation. Beyond the research contributions, Microsoft was also a sponsor of and recruiter at the conference. Additionally, the company sponsored two events co-located with the conference, the first Women in Machine Learning Un-Workshop and the fourth Queer in AI Workshop. The impact of the conference—now and in the future—is multifaceted, according to Langford. “ICML is ‘the’ summer machine learning conference. As such, it’s critically important to the academic discovery, review, and dissemination process, a great way to meet fellow researchers, and a natural recruiting point for the field,” he says.


An open source solution for continuous testing at scale

With recent and ongoing updates, organizations can leverage Cerberus' features from development to operations. It expands digital experience test coverage by executing tests on a variety of browsers, devices, and apps. Its native connectors for APIs (including SOAP and REST), desktop applications, and Apache Kafka enable testing of legacy apps, APIs, event-driven microservices, streaming services, business intelligence, data science applications, and other use cases. Throughout the software development lifecycle, Cerberus supports fast iterations in test management, execution, and reporting. Users can create test specifications in plain English, compose tests using a library, execute them in parallel on various devices, and produce advanced reports. Native integration with CI/CD solutions such as Jenkins, Bitbucket, and others, combined with one-click ticket creation in Jira and other tools, makes bug resolution faster and easier. Cerberus can also monitor customer experience and business operations. Tests can be both functional and technical, allowing organizations to test complex scenarios. For example, France's leading TV channel, TF1, uses it for quality assurance on its streaming platform.


Retrospectives for Management Teams

Good action points are the ones that propel the team forward and make them productive; I focus on quantity, quality, and the process itself. When it comes to quantity, it’s always wise to limit our commitments in order to maximize our chance of delivering them on time. Sometimes it pains the team to let go of some great ideas and not turn them into action points after a meeting. I believe it’s our duty as facilitators to increase the likelihood of a positive impact, even if it means cutting the number of initiatives we start simultaneously. When it comes to quality, in Radical Candor Kim Scott gives an easy-to-remember recipe for action points: you need to have a one-line answer to "who will do what by when?" If you do not have an answer to all three parts, you don’t have an action point after all. If you follow her lead, you get a statement that is easy to act upon, easy to check for completion, and easy to communicate to your stakeholders. Regarding the process, I like to encourage people to write their action items themselves - it helps them frame the items in a way they understand and find easy to act upon. It helps them remember the items, too.
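The "who will do what by when" test is strict enough that it can even be expressed as a tiny data model -- an item only counts as an action point when all three parts are present. This is just a sketch to make the recipe concrete; the names are invented:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ActionPoint:
    who: str        # a single accountable person
    what: str       # one concrete deliverable
    by_when: date   # an explicit deadline

    def is_actionable(self) -> bool:
        # Kim Scott's test: no answer to any of the three parts,
        # no action point.
        return bool(self.who and self.what and self.by_when)

    def one_liner(self) -> str:
        # The easy-to-communicate statement for stakeholders.
        return f"{self.who} will {self.what} by {self.by_when.isoformat()}"

item = ActionPoint("Ana", "draft the onboarding checklist", date(2025, 8, 15))
print(item.one_liner())
```

The one-liner is also what makes follow-up trivial at the next retrospective: each statement is checkable as done or not done.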


What is an IT director? Everything you need to know about one of the top jobs in tech

The first IT professionals were employed to help their organisations manage mainframe systems. As computers became more integral to the way we work, so technology leaders – be they IT directors or CIOs – started to be appointed. IT director was the more commonly used term initially. Through the late 1990s and into the new millennium, it became customary for the top executive in a business to take the CIO moniker. While that's still often the case, it's not a hard and fast rule – many organisations still use the IT director title to describe their most senior tech executive, or use closely related titles, such as head of IT, head of technology, vice president of IT, or VP of technology. Apart from the job title, the roles are perceived to have a subtly different focus. Many big organisations now employ a CIO and an IT director. Where both executives are in situ, a split in responsibilities is likely to occur. IT directors are more likely to ensure day-to-day technology operations meet the mark, covering areas such as system uptime, service maintenance and vendor agreements. CIOs, on the other hand, are seen as the outward face of the technology department – CIOs spend less time in the data centre and more time engaging with their business peers in an attempt to understand how technology can be used to help meet their demands.


The Age of Accelerating Strategy Breakthroughs

Leading companies are also prioritizing the need to identify threats and opportunities created by megatrends that can rapidly reshape businesses. The coronavirus pandemic has shown that negative megatrends like epidemics and climate change can no longer be treated as tail risks so extreme that no preparation would make a difference. Companies have to build up resilience to safeguard profits by being prepared to play ferocious defense against other negative megatrends gathering momentum, like public debt crises, at one end of the spectrum. At the other, they must aggressively pursue new prospects created by positive megatrends like digitalization and health and wellness. Macro shifts set off by the pandemic illustrate how quickly megatrends can force companies to reset strategies. Retailers are rerouting investments earmarked for building physical locations into upgrading online commerce features and delivery services. Financial services companies are accelerating many more digital-only offerings, such as contactless payments and risk management products such as health insurance.


How to avoid cloud vendor lock-in and take advantage of multi-vendor sourcing options

Businesses recognise the benefit of utilising different suppliers, and over half are now using more than one public cloud provider. According to McQuire, the moves of major cloud providers reflect this trend, with the launch of products like Google Cloud’s Anthos and BigQuery Omni, as well as Microsoft’s Azure Arc. “Customers and developers want depth in cloud services but don’t want to be locked into a single cloud environment. Above all, they want choice when it comes to spinning up infrastructure for new applications, lift-and-shift projects or maintaining consistency across their on-premises, public cloud and edge environments,” he comments. McQuire warns, however, that while the market is still very early in its transition to the cloud, “care must be taken in pursuing multi-cloud approaches, so that they are not adding even more complexity to an already highly complicated cloud computing stack. Whilst consistency is key in multi-cloud, there will be those that do not want a lowest-common-denominator approach in order to support this strategy.”
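One common engineering guard against lock-in is to code to a provider-neutral interface and keep vendor-specific calls behind adapters. The sketch below is illustrative only -- the class and method names are invented, and real adapters would wrap each provider's SDK:

```python
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    # Provider-neutral interface the application codes against.
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    # Stand-in adapter; an S3-backed or GCS-backed adapter would
    # implement the same two methods over the vendor's SDK.
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, name: str, body: bytes):
    # Business logic depends only on the interface, so switching clouds
    # means swapping the adapter, not rewriting this code.
    store.put(f"reports/{name}", body)

store = InMemoryStore()
archive_report(store, "q3.pdf", b"%PDF...")
print(store.get("reports/q3.pdf"))
```

This is also where McQuire's caveat bites: a shared interface can only expose what every provider supports, which is exactly the lowest-common-denominator trade-off some teams refuse to accept.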


How Ransomware Threats Are Evolving & How to Spot Them

"The cleverness, the creativity, and the intimate knowledge of these very, very miniscule technical details to craft a bypass like that is almost unseen in criminal malware," says Wisniewski. "It's the kind of thing we expect to see in espionage-style attacks, not in criminal attacks." Some attackers bypass technical tools by "living off the land," or using legitimate admin tools to achieve goals. Some use software deployment tools to roll out ransomware instead of delivering patches to Windows machines, Wisniewski says as an example. They may abuse PowerShell, other Microsoft tools, or so-called "gray hat" tools like Metasploit or Cobalt Strike. This behavior isn't new, Wisniewski says. "What is new is that may be the only indication you're going to get that they're in your network." Organizations may notice small, unusual things once in a while, remedy them, and close the ticket without realizing they're part of a larger incident. By the time they do, an attacker has been in their network for weeks. WastedLocker and Maze will "sit there for a month" to figure out the thing that will shut down their enterprise victim.



Quote for the day:

"Entrepreneurs must be willing to be misunderstood for long periods of time." -- Jeff Bezos