Daily Tech Digest - January 24, 2023

Status of Ethical Standards in Emerging Tech

This chasm between invention and accountability is the source of much of the angst, dismay, and danger. “It is much better to design a system for transparency and explainability from the beginning rather than to deal with unexplainable outcomes that are causing harm once the system is already deployed,” says Jeanna Matthews, professor of computer science at Clarkson University and co-chair of the ACM US Technology Committee’s Subcommittee on AI & Algorithms. To that end, the Association for Computing Machinery’s global Technology Policy Council (TPC) released a new Statement on Principles for Responsible Algorithmic Systems authored jointly by its US and Europe Technology Policy Committees in October 2022. The statement includes nine instrumental principles: Legitimacy and Competency; Minimizing Harm; Security and Privacy; Transparency; Interpretability and Explainability; Maintainability; Contestability and Auditability; Accountability and Responsibility; and Limiting Environmental Impacts, according to Matthews.


Best practices for devops observability

A select group of engineers may have the lead responsibilities around software quality, but they will need the full dev team to drive continuous improvements. David Ben Shabat, vice president of R&D at Quali, recommends, “Organizations should strive to create what I would call ‘visibility as a standard.’ This allows your team to embrace a culture of end-to-end responsibility and maintain a focus on continuous improvements to your product.” One way to address responsibility is by creating and following a standardized taxonomy and message format for logs and other observability data. Agile development teams should assign a teammate to review logs every sprint and add alerts for new error conditions. Ben Shabat adds, “Also, automate as many processes as possible while using logs and metrics as a gauge for successful performance.” Ashwin Rajeev, cofounder and CTO of Acceldata, agrees automation is key to driving observable applications and services. He says, “Modern devops observability solutions integrate with CI/CD tools, analyze all relevant data sources, use automation to provide actionable insights, and provide real-time recommendations.”


Why leveraging privacy-enhancing tech advances consumer data privacy and protection

Historically, proprietary privacy-enhancing technologies have been developed by location technology companies and used internally. However, it’s my firm belief that for organizations of all types to truly progress toward the level of consumer data privacy people want and expect, privacy-enhancing technologies created by location technology companies should be made available to all companies that could benefit from these advancements. ... These tools help add industry-leading privacy controls to a company’s own systems and work with any kind of location data, no matter how it is generated. This helps ensure that a company is meeting privacy requirements and protecting consumer data. If more technology companies made the privacy-enhancing features used in their own systems available to other companies, organizations across industries could better protect the data stored in their systems, and in turn, consumer data privacy and protection is likely to progress and improve more quickly. A crucial starting point is democratizing access to these technologies.


What’s To Come In 2023? Modern Frameworks, CISO Elevation & Leaner Security Stacks

The past year has shown the effects that whistleblowing (Twitter) can have when an organization ignores its employees flagging activity they consider fraudulent, unsafe, or illegal. But over the past year, we have also seen the consequences when CISOs actively ignore or hide security issues. For example, in the Uber situation, we saw, for the first time, criminal charges filed and then, later, a conviction. These contrasting stories create a potential no-win situation for CISOs who, on the one hand, may be ignored for calling out issues or, on the other, could face jail time if they actively turn a blind eye to (and/or hide) them. ... With the beginning of 2023 fraught with enormous economic and regulatory uncertainty, we will likely see a consolidation of tools and a greater focus on which tools are necessary. The nature of tech is that many organizations adopt tools to fix immediate problems, and often these tools have overlapping functionality and use cases. Although security budgets are likely to be a bit safer than those of other departments in a business, security teams will still need to consider what they must have to be successful with fewer resources.


The Benefits of an API-First Approach to Building Microservices

APIs have been around for decades. But they are no longer simply “application programming interfaces”. At their heart, APIs are developer interfaces. Like any user interface, APIs need planning, design, and testing. API‑first is about acknowledging and prioritizing the importance of connectivity and simplicity across all the teams operating and using APIs. It prioritizes communication, reusability, and functionality for API consumers, who are almost always developers. There are many paths to API‑first, but a design‑led approach to software development is the end goal for most companies embarking on an API‑first journey. In practice, this approach means APIs are completely defined before implementation. Work begins with designing and documenting how the API will function. ... In the typical enterprise microservice and API landscape, there are more components in play than a Platform Ops team can keep track of day to day. Embracing and adopting a standard, machine‑readable API specification helps teams understand, monitor, and make decisions about the APIs currently operating in their environments.
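
To make "define the API before implementation" concrete, here is a minimal sketch of a machine-readable contract, written as a plain Python dictionary in OpenAPI 3.0 shape. The service name, path, and fields are hypothetical; real teams would usually author this in YAML with dedicated design tooling.

```python
import json

# A minimal, hypothetical API contract authored before any implementation.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Orders Service", "version": "1.0.0"},
    "paths": {
        "/orders/{orderId}": {
            "get": {
                "summary": "Fetch a single order",
                "parameters": [{
                    "name": "orderId",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "string"},
                }],
                "responses": {
                    "200": {"description": "The order was found"},
                    "404": {"description": "No such order"},
                },
            }
        }
    },
}

# Because the contract is machine-readable, tooling can consume it to
# generate docs, client stubs, mock servers, and contract tests.
print(json.dumps(spec, indent=2))
```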


MITRE ATT&CK Framework: Discerning a Threat Actor’s Mindset

Many security solutions offer a wide range of features to detect and track malicious behavior in containers. Defense evasion techniques are meant to blind or mislead these tools so that everything the bad actor is doing seems legitimate. One example of defense evasion is building the container image directly on the host instead of pulling it from public or private registries. There are also evasion techniques that are harder to identify, such as those based on reverse forensics. Attackers use these techniques to delete all logs and events related to their malicious activities so that the administrator of a security, security information and event management (SIEM), or observability tool has no idea that an unauthorized event or process has occurred. To protect against defense evasion, you’ll need a container security solution that detects malware during runtime and provides threat detection and blocking capabilities. Two examples of this would be runtime threat defense to protect against malware and honeypots to capture malicious actors and activity.
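
As a deliberately naive illustration of spotting one anti-forensic move, log wiping, the sketch below polls a watchlist of log files and alerts when a file shrinks or vanishes. The paths and interval are assumptions, legitimate log rotation would need to be allow-listed, and real environments should rely on purpose-built runtime security tooling rather than a script like this.

```python
import os
import time

# Hypothetical watchlist; substitute the logs your hosts actually emit.
WATCHED_LOGS = ["/var/log/audit/audit.log", "/var/log/syslog"]

def watch_for_truncation(poll_seconds: int = 30) -> None:
    """Alert when a watched log shrinks or disappears -- one crude
    signal of log wiping. Runs until interrupted."""
    last_size: dict[str, int] = {}
    while True:
        for path in WATCHED_LOGS:
            try:
                size = os.stat(path).st_size
            except FileNotFoundError:
                print(f"ALERT: {path} disappeared")
                continue
            if path in last_size and size < last_size[path]:
                print(f"ALERT: {path} shrank from {last_size[path]} "
                      f"to {size} bytes")
            last_size[path] = size
        time.sleep(poll_seconds)

watch_for_truncation()
```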


CIO role: 5 strategies for success in 2023

CIOs must adapt to the changing business landscape brought on by the pandemic. With many organizations embracing hybrid work, the internet plays a more prominent role in the overall network strategy. Ensure that your systems and processes are optimized for this new reality. This includes prioritizing the user experience of remote workers and implementing better end-user experience monitoring to ensure that they can be productive and collaborate effectively.  ... As organizations increasingly adopt multi-cloud systems to manage their IT infrastructure, CIOs must be able to navigate the complexity of these environments effectively. One approach is implementing a seamless strategy across all major clouds to streamline management and reduce complexity. Consider how you can optimize performance and apply security uniformly across your multi-cloud estate. Also, be mindful of the changing regulatory and compliance landscape and look for cloud services with built-in compliance features to minimize the burden on your teams.


How passkeys are changing authentication

The latest FIDO passkey specs are multi-device. Once a passkey is established for a given service, the same device can be used to securely share it with another device. The devices must be in close proximity, within wireless connection range, and the user takes an active role in verifying the device sync. The remote cloud service for the given device also plays a role. That means that an iPhone uses Apple's cloud, an Android device uses Google Cloud Platform (GCP), and Windows uses Microsoft Azure. Efforts are underway to make sharing passkeys across providers simpler. It's a rather manual process to share across providers, for example, to go from an Android device to a macOS laptop. Passkeys are cryptographic keys, so gone is the possibility of weak passwords. They do not share vulnerable information, so many password attack vectors are eliminated. Passkeys are resistant to phishing and other social engineering attacks: the passkey infrastructure itself negotiates the verification process and isn’t fooled by a good fake website -- no more accidentally typing a password into the wrong form.
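
The core mechanic behind a passkey is an asymmetric challenge-response: the relying party stores only a public key and verifies a signature over a fresh challenge. The sketch below, using the pyca/cryptography package, compresses that flow into a few lines; it is a simplification that omits WebAuthn specifics such as origin binding, attestation, and user verification.

```python
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Enrollment: the authenticator keeps the private key; the website
# (relying party) stores only the public key.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Login: the site issues a random challenge, the authenticator signs it.
challenge = os.urandom(32)
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# Verification: no reusable secret ever crosses the wire, so a fake
# site that captures the exchange learns nothing it can replay.
try:
    public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("challenge verified")
except InvalidSignature:
    print("verification failed")
```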


CIOs sharpen tech strategies to support hybrid work

With competition for talent still tight and pressure on organizations to maximize employee productivity, Anthony Abbatiello, workforce transformation practice leader at professional services firm PwC, says CIOs should focus on what and how they can improve the hybrid experience for users. He advises CIOs to partner with their counterparts in HR to identify the worker archetypes that exist in their organizations to understand how they work and what they need to succeed. “CIOs should be asking how to create the right experience that each worker needs and what do they need to be productive in their job,” Abbatiello says. “Even if you’ve done that before, the requirements of people in a hybrid environment have changed.” Hybrid workers today are looking for digital workplace experiences that are seamless as they move between home and office, Abbatiello says. This includes technologies that enable them to replicate in cyberspace the personal connections and spontaneous collegiality that more easily happen in person, as they seek experiences that are consistent regardless of where they’re working on any given day.


Platform Engineering 101: What You Need to Know About This Hot New Trend

Before platform teams can start building their product, they need to define a clear mission statement to guide the process. This mission statement should fit the overall goals of the organization and proactively define the role of the platform team within the organization. It should also inspire your engineers. HashiCorp’s Director of Platform Engineering Infrastructure Michael Galloway summarizes this well: “It should be emotional and inspiring. … It should be simple but meaningful.” You can start by defining your goals. This could encompass things like enabling the required degree of developer self-service without adding cognitive load, or achieving the desired reduction of tickets that go to ops without forcing developers to learn infrastructure-centric technologies end-to-end. After this, you’ll probably wind up with something like: “Our mission is to standardize workflows to improve the developer experience, speed up innovation cycles, and shorten time to market for the engineering organization.” It’s descriptive but not inspiring. Refine your mission statement to strike a good balance. For example: “Our mission is to build a platform developers love because it lets them innovate faster.”



Quote for the day:

"Leadership does not always wear the harness of compromise." -- Woodrow Wilson

Daily Tech Digest - January 23, 2023

CIOs step up to fill plus-size leadership roles

While it’s common for others in the C-suite to take on so-called plus-roles, the CIO has historically worn multiple hats without the benefit or glory of a formal title marking their expanded terrain. The successful shift to remote work and accelerated digital transformation during the pandemic years elevated CIOs’ stature, earning trust and recognition from business leaders that was absent in the past. With technology now a pervasive enabler of all aspects of the business, CIOs have the end-to-end visibility and cross-functional insights that other executives lack, not to mention oversight of key enterprise assets such as data and analytics. “We always had business leaders take on other roles, but the CIO hasn’t been considered for one of those pluses — they were always rolled up underneath another executive that was,” says John Hill, senior vice president and chief digital information officer at MSC Industrial Supply, where he heads up digital strategy, ecommerce, marketing, vending solutions, metalworking innovation, and advanced analytics. 


ChatGPT Prompts for Agile Practitioners

“One important thing to keep in mind is that conflict is a natural part of any team and should be viewed as an opportunity for growth and improvement. As a Scrum Master, it's important to create an environment where team members feel comfortable discussing and resolving conflicts in a constructive manner. It's also important to be aware that conflicts can often be the result of deeper underlying issues, such as lack of trust, poor communication, or unmet needs. As a Scrum Master, it's important to be aware of these underlying issues and work with the team to address them. Additionally, it's important to be aware of the team's dynamics and to be mindful of the impact that conflicts can have on individuals and the team as a whole. As a Scrum Master, it's important to be aware of any team members who may be more affected by the conflict and to provide them with additional support as needed. Finally, it is important to be aware of the impact of external factors on the team, such as changes in the organizational structure or market conditions, and to take them into account when addressing conflicts and underlying issues.”


3 predictions for open source in confidential computing

Confidential computing is the practice of isolating sensitive data and the techniques used to process it. This is as important on your laptop, where your data must be isolated from other applications, as it is on the cloud, where your data must be isolated from thousands of other containers and user accounts. As you can imagine, open source is a significant component for ensuring that what you believe is confidential is actually confidential. This is because security teams can audit the code of an open source project. ... In the past year, a lot of discovery and educational activities were developed. Confidential computing is now better known, but it has yet to become a mainstream technology. The security and developer communities are gaining a better understanding of confidential computing and its benefits. If this discovery trend continues this year, it could reach more outlets, like conferences, magazines, and publications, showing that these entities recognize the value of confidential computing. In time, they may start to offer more airtime for talks and articles on the subject.


Exploring Cloud-Native Acceleration of Data Governance

Any seasoned data governance practitioner knows that implementing data management is only for those with extraordinary perseverance. Identifying and documenting data applications and flows, measuring and remediating data quality, and managing metadata — despite a flood of tools that attempt to automate many data management activities, these typically remain manual and time-consuming. They are also expensive. At leading organizations, data governance and remediation programs can top $10 million per year; in some cases, even $100 million. Data governance can be seen as a cost to the organization and a blocker for business leaders with a transformation mandate. So, here appears our “so-what.” If data governance could be incorporated directly into the fibers of the data infrastructure while the architectural blocks are being built and connected, it would dramatically reduce the need for costly data governance programs later. We will review three essential design features that, once incorporated, ensure that data governance is embedded “by design” rather than by brute manual force.


Crossplane: A Package-Based Approach to Platform Building

While Crossplane allows users to install many packages alongside one another, a common pattern has emerged of defining a single “root” package, which is an ancestor node of every other package that gets installed in a control plane. Doing so makes the entire API surface area reproducible by installing that one package. As an organization grows and evolves over time, that package can be expanded by defining new APIs or establishing new dependencies. ... Furthermore, because using a root package is a convention rather than a technical constraint, the property of composability is not violated. Taking the previously mentioned MySQL Configuration package for example, a user may install it as the root package of their MySQL database control plane, while another user may depend on it as only one component of a much larger API surface for use cases like internal cloud platforms. ... While the attributes of Crossplane packages enable platform builders to quickly add functionality to their control planes, they also present a unique distribution mechanism for vendors.


The loneliness of leading a cybersecurity startup

When building something unprecedented and game-changing, the course and rules are steeped in darkness and uncertainty, with naysayers, critics, board members, competitors and time itself hurling criticisms every step of the way. Why on earth would anyone subject themselves to that? Perhaps it should be made clear that, for most of the aspiring CEOs I meet, their career path is hardly a choice. Entrepreneurship often runs in their blood, serving as a defining force for the people who pursue it. This all-consuming nature has become a necessary qualifier for most investors; it’s an important tool for survival, as well as for success. This is not to discount the absolute necessity of vision, inspiration and faith in a good idea. But on the long road to entrepreneurial success, these are often subject to so much scrutiny and judgement that a different strength is necessary to stay the course. These days, tightening budgets and dreaded potential layoffs only add to the pressure they feel. However, during the toughest times, an entrepreneur’s hunger to build can provide the critical momentum they need to move forward.


IT hiring: How to find the right match

A strong company mission is not only crucial to attracting talent, but it will also motivate existing employees to do their work well. I am motivated to push through the complex problems I face in my work, for example, because I know that my company is solving some of the most significant issues in the healthcare industry. My work ultimately brings a better experience to hospital staff and patients, which leads to improved patient outcomes. In every organization, it is crucial for executives and board members to focus on agreed-upon goals and to share with employees how the company is meeting them regularly. Furthermore, employees seek a collaborative environment where they work together toward a common goal. ... Intelligent, scrappy workers are often attracted to startups, which allow ambitious employees to wear many hats and build new skills quickly. I am fascinated by the technical and intellectual aspects of my job and find them highly rewarding. My teammates and I don’t know all the answers as we work with new models, so we must test our hypotheses and rethink our approach.


Pentagon must act now on quantum computing or be eclipsed by rivals

Many experts, including Spirk, believe that military applications for quantum computing could be less than 10 years away. Case in point: according to the Pentagon’s annual report on Chinese military power, China recently designed and fabricated a quantum computer capable of outperforming a classical high-performance computer for a specific problem. This is also why DARPA announced the ‘Underexplored Systems for Utility-Scale Quantum Computing’ (US2QC) program to explore potentially overlooked methods by which quantum computers could achieve practical levels of utilization much faster than current predictions suggest. The White House recently signed the Quantum Computing Cybersecurity Preparedness Act into law, signaling that it regards quantum as a serious issue. The act addresses the migration of executive agencies’ IT systems to post-quantum cryptography (PQC) - encryption which is secure from attacks by quantum computers because of the advanced mathematics underpinning it.


How Will the AI Bill of Rights Affect AI Development?

The AIBoR states that AI systems should be transparent and explainable, and not discriminate against individuals based on various protected characteristics. “This would require AI developers to design and build transparent and fair systems and carefully consider their systems’ potential impacts on individuals and society,” explains Bassel Haidar, artificial intelligence and machine learning practice lead at Guidehouse, a business consulting firm. Creating transparent and fair AI systems could involve using techniques, such as feature attribution, that can help identify the factors that influenced a particular AI-driven decision or prediction, Haidar says. “It could also involve using techniques such as local interpretable model-agnostic explanations (LIME), which can help to explain the decision-making process of a black-box AI model.” AI developers will have to thoroughly test and validate AI systems to ensure that they function properly and make accurate predictions, Haidar says. “Additionally, they will need to employ bias detection and mitigation techniques to help identify and reduce potential biases in the AI system.”
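
As a hedged illustration of the LIME technique mentioned above, the sketch below explains one prediction of a black-box classifier using the open-source lime package; the dataset and model are stand-ins, not anything specific to the AIBoR.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train an opaque model, then ask LIME to explain a single prediction.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the input locally and fits an interpretable surrogate,
# yielding per-feature weights for this one decision.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```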


The metaverse brings a new breed of threats to challenge privacy and security gatekeepers

While security experts point to authentication and access controls to protect against metaverse-based scams and attacks, the growing number of platforms providing access to the metaverse may or may not have secure mechanisms for recognizing frauds, says Paul Carlisle Kletchka, governance, risk, and compliance (GRC) analyst with Lynx Technology Partners, a provider of GRC services. “One of the major vulnerabilities is the lack of standardized security protocols or mechanisms in place across the platforms,” he says. “As a result, cybercriminals can use the metaverse for a variety of purposes such as identity theft, fraud, or malicious attacks on other users. Since people can download programs and files from within the metaverse, there is also a risk that these files could contain malware that could infect a user's computer or device and spread back into the organization’s systems. Another threat is piracy: since the metaverse is still in its early stages of development, there are no laws or regulations written specifically for the metaverse to protect intellectual property within this digital environment.”



Quote for the day:

"A leader is judged not by the length of his reign but by the decisions he makes." -- Klingon Proverb

Daily Tech Digest - January 22, 2023

Top Humanoid Robots Innovations So Far

Sophia is a social humanoid robot. She was activated in February 2016 and made her first public appearance at the South by Southwest Festival in mid-March 2016 in Austin, TX. Since its launch, Sophia has garnered a lot of media coverage, featuring in numerous high-profile interviews, events, and panel discussions across the world. ... Toyota T-HR3 is a third-generation humanoid robot, which was designed from the get-go to be remote-controlled by a human. It is 1.5 meters tall, weighs 75 kilograms, and has 32 degrees of torque-controlled freedom, along with a pair of 10-fingered hands. The robot is designed to be a platform with capabilities that can safely assist people in a variety of settings like homes, medical facilities, disaster-stricken areas, construction sites, and outer space. ... E2-DR is a disaster response robot from Honda that is able to navigate through dangerous, complex environments. The robot looks like a humanoid and is heavier and tougher than the company’s Asimo, first presented in 2000. Honda E2-DR is designed to perform as a rescuer in a broad range of situations dangerous for human rescuers.


OpenAI, ChatGPT and the intensifying competition for data management within the supercloud

What many industry analysts are seeing, much to the chagrin of large data/search players like Google, is that OpenAI has leaped to the forefront of providing the capabilities to handle the data requirements of the supercloud. A lot of this is due to the concentrated capabilities within ChatGPT born from tedious underlying work, such as the training of machine learning models, according to Xu. As a result, companies need to be proactive enough to see AI technologies as critical to a supercloud future instead of just being in the count while leaving AIOps on the back burner. “For most of the Fortune 500 companies, your job is to survive the big revolution,” Xu said. “So you at least need to do your walmart.com sooner than later and not be like GE with a lot of the hand-waving.” Microsoft, for its part, has shown some of that foresight, as it’s recently invested around $10B into OpenAI and worked with the company across several areas, including its API services.


Overcoming Challenges in Privacy Engineering

The bigger the company, the greater the likelihood that there’ll be considerable amounts of legacy code lurking in the depths of the organization’s systems. Very few developers properly understand legacy code, so it’s usually highly opaque. Some employees might know the connections for some of the lines of code, and some sections might have been replaced more recently, but in general there’s very poor visibility into which services are related to which database, which services are sharing data with which other services, and other aspects of legacy code. On top of all this, data mapping projects are caught in a tech version of Zeno’s paradox. Most of the projects that are being mapped are live projects, which means that more data, more tables, and more connections are being added on a continual basis. But most data mapping is currently carried out manually. The map is out of date as soon as it’s completed, because of the speed at which live projects expand. There’s no way that human employees can keep up with the pace at which new data and relationships are added to the project. 


Cisco Report Shows Cybersecurity Resilience as Top of Mind

The report delves into the factors that could provide the biggest gains in enterprise security resilience, whether based on culture, IT environment, or security technology. Cisco took these factors and devised a security resilience scoring system based on seven areas. Those most closely adhering to these core principles are in the top 10% of resilient businesses. Those missing most of these elements are in the bottom 10%. Culture is especially vital. Those with poor security support from the C-suite score 39% lower than those with strong executive support. Similarly, those with a thriving security culture score 46% higher than those lacking it. But it isn’t all about culture. Staffing, too, played a definite role, whether based on experienced staff, certification and training, or the sheer number of internal resources. The report shows those companies maintaining extra internal staffing and resources to respond to incidents gain a 15% boost in resilient outcomes. In other words, headcount can mean the difference between faring well and poorly during an event. 


How distributed architecture can improve the resilience of your organization

Distributed architecture is not exactly a new thing to the average IT department, but organizations aren’t always aware of all the benefits that it provides – things like improved scalability, performance, cost savings and resiliency. ... Cost savings is a common driver for establishing a distributed architecture. By setting up multiple nodes, you can route traffic through the nearest node instead of relying on simpler call-routing rules, like having all participants connect to the node closest to the first person to join the call. Bandwidth consumption on WAN networks can be very expensive, with transatlantic costs especially high. With Pexip, nodes can be placed within your internal network to reduce the cost of the traffic on WAN networks. An added cost-saving feature from Pexip comes from our media transcoding. Media streams coming back from Pexip are reduced in size as they travel between nodes. Since Pexip handles the compute, you’re left with a more efficient media traffic flow that costs less. Distributed architecture means that your entire deployment is more resilient.
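
To illustrate nearest-node routing in the simplest possible terms, the sketch below times a TCP handshake to each candidate node and picks the fastest. This is a generic latency probe under assumed hostnames, not Pexip's actual routing logic.

```python
import socket
import time

# Hypothetical regional media nodes.
NODES = ["node-eu.example.com", "node-us.example.com", "node-apac.example.com"]

def measure_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Estimate round-trip time by timing a TCP handshake."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return float("inf")  # unreachable nodes sort last

def nearest_node() -> str:
    return min(NODES, key=measure_rtt)

print("route media through:", nearest_node())
```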


Hardening The Last Line Of Defence For Financial Organisations

IT infrastructure and security operations teams live in two worlds that are often separated by design. Whilst the SecOps teams want to regulate all access as strictly as possible, the IT infrastructure teams need to be allowed to access all important systems for backup. Many of these teams are not collaborating as effectively as possible to address growing cyber threats, as a recent survey found. Among respondents who believe collaboration between IT and security is weak, nearly half believe their organisation is more exposed to cyber threats as a result. For true cyber resilience these teams must work closely together, as the high number of successful attacks proves that attack vectors are changing and it’s not just about defence, but backup and recovery. ... If financial organisations want to achieve real cyber resilience and successfully recover critical data even during an attack, they will have to modernise their backup and disaster recovery infrastructure and migrate to modern approaches such as a next-gen data management platform.


Effective business continuity requires evolution and a plan

IT and cybersecurity teams can work with other business decision-makers to assess risk levels for each system. This involves comparing the organization's business model against the IT infrastructure to determine which systems are mission-critical to operations. During the risk analysis, key considerations -- such as whether the organization can survive without email for a week, what systems are regularly backed up and what systems are cloud-based vs. on premises -- should be weighed and addressed. Organizations may want to assign tiers to each system to define which ones must be restored the fastest. It's often the safest course to colocate critical systems or keep certain backup systems offline. Ensure the colocation isn't connected to the corporate network via Active Directory and that it's segmented from other systems, as compromises can occur if the colocation is the primary environment for data storage and has a connection to the corporate network. Colocation lets organizations bring the most essential systems back online and continue operations, even if core systems have been breached or otherwise disrupted.


Four Steps To Self-Service Data Governance Success

Data governance can help teams oversee and control access to confidential information. You could unlock automation for data security faster with a no-code/low-code approach. A no-code approach could make self-service data governance easier by handling all of the complicated things behind a simple interface. Your data teams won't have to write hundreds of lines of code to handle complex, repetitive procedures like applying granular access policies to many users simultaneously. To simplify your transition to no-code, start with a pilot. Look for no-code/low-code technology that lets you move quickly into implementation. Prioritize options that let you sign on to the service in minutes without requiring long-term contracts. Then, connect your cloud database and control access with classification-based policies that don't require your team to write code to allow only approved users to view the data. When the situation calls for more customization, like trying to see who has access to your cloud database, test the low-code capability. ... A no-code/low-code capability could make the job of managing data governance infinitely easier.


Top 5 Considerations for Better Security in Your CI/CD Pipeline

Securing running microservices is just as crucial to an effective CI/CD security solution as preventing application breaches by moving security to the pipeline’s earlier stages. The context necessary to comprehend Kubernetes structures — such as namespaces, pods and labels — is not provided by conventional next-generation firewalls (NGFW). Implicit trust and flat networks rely on the perimeter to thwart external attacks; once the perimeter has been compromised, they give attackers a great deal of surface. As a result, it’s important to leverage a platform that enables continuous security and centralized policy and visibility for efficient and effective continuous runtime security. The majority of application teams automate their build process using build tools like Jenkins. Security solutions must integrate with popular build frameworks to bring security to a build pipeline. Such integration enables teams to pick up new skills quickly and pass or fail builds depending on the requirements of their organization.
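
One concrete way to "pass or fail builds depending on the requirements of the organization" is a severity-threshold gate. The sketch below assumes a scanner has already written a JSON report with a severity per finding; the filename, report shape, and thresholds are all illustrative.

```python
import json
import sys

THRESHOLDS = {"critical": 0, "high": 3}  # assumed org-specific policy

# Hypothetical report produced by an earlier pipeline stage.
with open("scan-report.json") as f:
    findings = json.load(f)["findings"]

counts: dict[str, int] = {}
for finding in findings:
    sev = finding["severity"].lower()
    counts[sev] = counts.get(sev, 0) + 1

for severity, allowed in THRESHOLDS.items():
    if counts.get(severity, 0) > allowed:
        print(f"FAIL: {counts[severity]} {severity} findings (max {allowed})")
        sys.exit(1)  # a non-zero exit fails the Jenkins (or other CI) build

print("PASS:", counts)
```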


4 High-Impact Data Quality Issues That Are Easily Avoidable

In the modern data stack, data quality issues can range from semantic and subjective – which are hard to define – to operational and objective, which are easy to define. For instance, objective and easier-to-define issues would be data showing up with empty fields, duplicate transactions being recorded, or even missing transactions. More concrete, operational issues could be data uploads not happening on time for critical reporting, or a data schema change that drops an important field. Whether a data quality issue is highly subjective or unambiguously objective depends on the layer of the data stack it originates from. A modern data stack and the teams supporting it are commonly structured into two broad layers: 1) the data platform or infrastructure layer; and 2) the analytical and reporting layer. The platform team, made up of data engineers, maintains the data infrastructure and acts as the producer of data. This team serves the consumers at the analytical layer, ranging from analytics engineers to data analysts and business stakeholders.
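
The objective, easy-to-define issues named above lend themselves to simple automated checks. Here is a minimal pandas sketch covering empty fields, duplicate transactions, and a dropped schema field; the column names are assumptions standing in for a real contract between producer and consumer.

```python
import pandas as pd

EXPECTED_SCHEMA = {"txn_id", "amount", "timestamp"}  # assumed data contract

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Flag empty fields, duplicate transactions, and schema drift."""
    problems = []
    missing = EXPECTED_SCHEMA - set(df.columns)
    if missing:
        problems.append(f"schema change dropped fields: {sorted(missing)}")
    empties = int(df.isna().sum().sum())
    if empties:
        problems.append(f"{empties} empty fields")
    if "txn_id" in df.columns:
        dupes = int(df.duplicated(subset=["txn_id"]).sum())
        if dupes:
            problems.append(f"{dupes} duplicate transactions")
    return problems

df = pd.DataFrame({"txn_id": [1, 1, 2], "amount": [10.0, 10.0, None]})
print(run_quality_checks(df))  # flags the dropped column, the NaN, the dupe
```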



Quote for the day:

"Don't be buffaloed by experts and elites. Experts often possess more data than judgement." -- Colin Powell

Daily Tech Digest - January 21, 2023

Is Your Innovation Project Condemned To Succeed?

The challenge in most organizations is that leaders are looking to make big bets on a few projects. These bets are typically based on asking innovation teams to create a business case before they receive investment. A business case showing good returns will receive investment with the expectation that it will succeed. The team is given no room for failure. ... This problem is exacerbated if your team has received a large investment to work on the project. Most innovation teams lose the discipline to test their ideas if they have large budgets to spend. In most cases they burn through the money while executing on their original idea. By the time they learn that the idea may not work, they have already spent millions of dollars. At this point, admitting failure is career suicide. ... Imagine being the CEO’s pet project, having a large investment and then being publicly celebrated as a lighthouse project before you have made any money for the company. This public celebration of a single innovation project puts a lot of pressure on innovation teams to succeed. 


Which cloud workloads are right for repatriation?

Look at the monthly costs and values of each platform. This is the primary reason we either stay put on the cloud or move back to the enterprise data center. Typically the workload has already been on the cloud for some time, so we have a good understanding of the costs, talent needed, and other less-quantifiable benefits of cloud, such as agility and scalability. You would think that these are relatively easy calculations to make, but they become complex quickly. Some benefits are often overlooked, and architects make mistakes that cost the business millions. All costs and benefits of being on premises should be considered, including the cost of the humans needed to maintain the platforms (actual hardware and software), data center space (own or rent), depreciation, insurance, power, physical security, compliance, backup and recovery, water, and dozens of other items that may be specific to your enterprise. Also consider the true value of agility and scalability that will likely be lost or reduced if the workloads return to your own data center.
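
As a toy version of that calculation, the sketch below totals assumed monthly line items for each option. Every number is made up; the point is only that the on-premises side carries many easily overlooked categories.

```python
# Illustrative numbers only; substitute figures from your own estate.
cloud_monthly = {
    "compute": 42_000, "storage": 9_000, "egress": 6_500, "support": 4_000,
}
on_prem_monthly = {
    "hardware_depreciation": 18_000, "datacenter_space": 7_500,
    "power_and_cooling": 4_200, "staff": 25_000, "backup_and_dr": 3_800,
    "insurance_and_compliance": 2_000,
}

cloud_total = sum(cloud_monthly.values())
on_prem_total = sum(on_prem_monthly.values())
print(f"cloud: ${cloud_total:,}/mo  on-prem: ${on_prem_total:,}/mo")
print(f"monthly delta if repatriated: ${cloud_total - on_prem_total:,}")
# A favorable delta is not the whole story: lost agility and scalability
# are real costs this naive model does not capture.
```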


Network automation: What architects need to know

It's great to strive for an automation-first culture and find innovative ways to use technology as a competitive advantage, but I recommend first targeting low-risk, high-reward tasks. Try to create reusable building blocks to operate more efficiently. One example is automating the collection and parsing of operational data from the network, such as routing protocol session state, VPN service status, or other relevant metrics, to produce actionable or consumable outputs. Gathering this information is a read-only activity, so the risk is low. The reward is high because this task is a time-consuming, repetitive process. Also, you can use this data for various purposes, such as creating reports, running audits, filling in trouble tickets, performing pre- and post-checks during maintenance windows, and so on. You don't need to wait until you get everything right to start. Improve on your automation solution iteratively. Small initial steps can make a big difference in your network. For example, for the data collection example above, you don't need the full list of key performance indicators (KPIs) on day 1; your users will let you know what you're missing over time.
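
For a flavor of such a read-only building block, the sketch below pulls BGP session state over SSH with the netmiko library and flags peers that are not established. The device details are placeholders, and the line parsing is a crude heuristic; production code would use structured parsing (for example, TextFSM templates).

```python
from netmiko import ConnectHandler  # pip install netmiko

# Hypothetical device; read-only "show" commands keep the risk low.
device = {
    "device_type": "cisco_ios",
    "host": "192.0.2.1",
    "username": "automation",
    "password": "********",
}

with ConnectHandler(**device) as conn:
    raw = conn.send_command("show ip bgp summary")

# Crude heuristic: peer rows start with an IP; when a peer is down, the
# last column holds a state word (e.g. Idle) instead of a prefix count.
down_peers = [
    line.split()[0]
    for line in raw.splitlines()
    if line and line[0].isdigit() and line.split()[-1].isalpha()
]
print("BGP peers not in Established state:", down_peers)
```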


Finding Adequate Metrics for Outer, Inner, and Process Quality in Software Development

Quite an obvious criterion for outer quality is the question of whether users like the product. If your product has customer support, you could simply count the number of complaints or contacts. Additionally, you can categorize these to gain more information. While this is in fact a lot of effort and far from trivial, it is a very direct measure and might yield a lot of valuable information on top. One problem here is selection bias. We are only counting those who get in contact, ignoring those who are not annoyed enough to bother (yet). Another similar problem is survivorship bias. We ignore those users who simply quit due to an error and never bother to get in contact. Both biases may lead us to over-focus on the issues of a complaining minority, while we should rather further improve things users actually like about the product. Besides these issues, the complaint rate can also be gamed: simply make it really hard to contact customer support by hiding contact information or increasing waiting time in the queue.


Platform Engineering Won’t Kill the DevOps Star

“The movement to ‘shift left’ has forced developers to have an end-to-end understanding of an ever-increasing amount of complex tools and workflows. Oftentimes, these tools are infrastructure-centric, meaning that developers have to be concerned with the platform and tooling their workloads run on,” Humanitec’s Luca Galante writes in his platform engineering trends in 2023, which demands more infrastructure abstraction. Indeed, platform engineering could be another name for cloud engineering, since so much of developers’ success relies on someone abstracting away the complexity of the cloud — and so many challenges are found in that stack, which is often seven layers thick. Therefore you could say platform engineering takes the spirit of agile and DevOps and extends it within the context of a cloud native world. Kennedy pointed to platform engineering’s origins in Team Topologies, where “the platform is designed to enable the other teams. The key thing about it is kind of this self-service model where app teams get what they want from the platform to deliver business value,” she said.


The Concept of Knowledge Graph, Present Uses and Potential Future Applications

A knowledge graph is a database that uses a graph structure to represent and store knowledge. It is a way to express and organize data that is easy for computers to understand and reason about, and which can be used to perform tasks such as answering questions or making recommendations. The graph structure consists of nodes, which represent entities or concepts, and edges, which represent relationships between the nodes. For example, a node representing the concept "Apple" might have edges to nodes representing the concepts "Fruit," "Cupertino, California," and "Tim Cook," which represent relationships such as "is a type of," "is located in," and "has a CEO of," respectively. In a knowledge graph, the relationships between nodes are often explicitly defined and stored, which allows computers to reason about the data and make inferences based on it. This is in contrast to traditional databases, which store data in tables and do not have direct relationships between the data points.
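
The article's own example translates almost directly into code. Here is a minimal sketch with the networkx library; the triples mirror the text, and nothing beyond them is implied.

```python
import networkx as nx  # pip install networkx

G = nx.DiGraph()
# (subject, object, relationship) triples from the example above.
G.add_edge("Apple", "Fruit", relation="is a type of")
G.add_edge("Apple", "Cupertino, California", relation="is located in")
G.add_edge("Apple", "Tim Cook", relation="has a CEO of")

# Because relationships are explicit, simple questions become graph
# traversals rather than table joins.
for _, obj, attrs in G.out_edges("Apple", data=True):
    print(f"Apple --{attrs['relation']}--> {obj}")
```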


4 tips to broaden and diversify your tech talent pool

Apprenticeships are extremely valuable for both employers and candidates. For employers, apprenticeships are a cost-effective way to groom talent, providing real-world training and a skilled employee at the end of the program. Apprenticeship programs also reduce the ever-present risk of hiring a full-time entry-level employee, who may prove to not be up to the required standard or decide for themselves that the organization or industry is not a fit. For workers, an apprenticeship is essentially a crash course providing the opportunity to earn while they learn. With the average college graduate taking on $30,000 in debt (and many taking on much more), a degree has increasingly become out of financial reach for many Americans. Apprenticeships are an excellent way for people to gain tangible work experience and applicable skills while also providing a trial run to determine whether a career in cybersecurity is right for them. For me, apprenticeship programs are a true win-win. During National Apprenticeship Week this year, we joined the Department of Labor’s event at the White House to celebrate the culmination of the 120-day Cybersecurity Apprenticeship Sprint. 


Debugging Threads and Asynchronous Code

Let’s discuss deadlocks. Here we have two threads, each waiting on a monitor held by the other thread. This is a trivial deadlock, but debugging is trivial even for more complex cases. Notice the bottom two threads have a MONITOR status. This means they’re waiting on a lock and can’t continue until it’s released. Typically, you’d see this in Java when a thread is waiting on a synchronized block. You can expand these threads and see what’s going on and which monitor is held by each thread. If you’re able to reproduce a deadlock or a race in the debugger, they are both simple to fix. Stack traces are amazing in synchronous code, but what do we do when we have asynchronous callbacks? Here we have a standard async example from JetBrains that uses a list of tasks and just sends them to the executor to perform on a separate thread. Each task sleeps and prints a random number. Nothing to write home about. As far as demos go this is pretty trivial. Here’s where things get interesting. As you can see, there’s a line that separates the async stack from the current stack on the top.
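
The article's deadlock is a Java example; here is an equivalent minimal reproduction in Python, useful for pausing under a debugger. The two threads take the same pair of locks in opposite order, so each ends up waiting on a lock the other holds, and the program intentionally hangs.

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

def worker(first: threading.Lock, second: threading.Lock, name: str) -> None:
    with first:
        time.sleep(0.1)          # widen the race window
        print(f"{name} waiting on its second lock...")
        with second:             # never acquired: the peer holds it
            print(f"{name} finished")

# Opposite lock ordering -- the classic deadlock recipe.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "thread-1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "thread-2"))
t1.start(); t2.start()

# Pause here in a debugger: both threads show as waiting on a lock held
# by the other, the same pattern as the MONITOR status described above.
t1.join(); t2.join()
```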


3 requirements for developing an effective cloud governance strategy

Governance is not a one-size-fits-all proposition, and each organization may prefer a different approach to governance depending on its objectives. Digital transformation is no longer a novel concept. But continuous innovation is required to improve and remain competitive, making automation critical for operational efficiency. According to IDC's Worldwide Artificial Intelligence and Automation 2023 Predictions, AI-driven features are expected to be embedded across business technology categories by 2026, with 60% of organizations actively utilizing such features to drive better outcomes. Automation is critical for increasing efficiency in cloud management operations, such as billing and cost transparency, right-sizing compute resources, and monitoring cost anomalies. The use of automated tools can improve security, lower administrative overhead, decrease rework, and lower operational costs. Definable metrics and key performance indicators (KPIs) can be used to assess outcomes with the right cost transparency tool. ... Automation can also aid in resolving personnel issues, which can cause migration projects to stall.


Styles of machine learning: Intro to neural networks

What makes the neural network powerful is its capacity to learn based on input. This happens by using a training data set with known results, comparing the predictions against it, then using that comparison to adjust the weights and biases in the neurons. ... A common approach is gradient descent, wherein each weight in the network is isolated via partial derivation. For each weight, the loss equation is expanded via the chain rule and a fine-tuning is made to that weight to move overall network loss lower. Each neuron and its weights are considered as a portion of the equation, stepping backwards from the last neuron(s). You can think of gradient descent this way: the error function is the graph of the network's output, which we are trying to adjust so its overall shape (slope) lands as well as possible according to the data points. In doing gradient backpropagation, you stand at each neuron’s function and modify it slightly to move the whole graph a bit closer to the ideal solution. The idea here is that you consider the entire neural network and its loss function as a multivariate equation depending on the weights and biases.
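
To ground the description, here is a tiny gradient-descent loop for a single sigmoid neuron on made-up data. The constant factor from the squared-error derivative is folded into the learning rate; everything here is a toy sketch, not a production training loop.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # learnable toy target

w, b, lr = np.zeros(2), 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(500):
    pred = sigmoid(X @ w + b)
    err = pred - y                 # from the squared-error derivative
    # Chain rule: dLoss/dw = dLoss/dpred * dpred/dz * dz/dw
    grad_z = err * pred * (1.0 - pred)
    w -= lr * (X.T @ grad_z) / len(y)
    b -= lr * grad_z.mean()

print("learned weights:", w, "bias:", round(b, 3))
```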



Quote for the day:

"The secret of leadership is simple: Do what you believe in. Paint a picture of the future. Go there. People will follow." -- Seth Godin

Daily Tech Digest - January 20, 2023

Generative AI isn’t about what you think it is

ChatGPT and other generative artificial intelligence (AI) programs like DALL-E are often thought of as a way to get rid of workers, but that isn’t their real strength. What they really do well is improve on the work people turn out. There’s often a conflict between doing something fast and doing it well — a conflict generative AI could end by helping people become better and faster creators. And clearly, if these tools were presented more as assistants rather than as a replacement for people, the blowback we’ve seen (most recently in court) could be tamped down. ... We usually measure productivity as the amount of work done in a given time — without taking into account the quality of that work. Typically, the faster you do something, the lower the quality. Quality in and of itself is an interesting subject. I remember reading the book “Zen and the Art of Motorcycle Maintenance,” which uses storytelling to explain how quality is fluid and depends on the perception of the person observing it. For instance, what’s considered high quality in a sweat shop would be completely unacceptable in a Bentley factory.


Enterprises remain vulnerable through compromised API secrets

While many security teams assign specific entitlements to API keys, tokens, and certificates, the survey discovered that more than 42% do not. That means they’re granting all-or-nothing access to any users bearing these credentials, which, although the path of least resistance in access management, also increases the security risk. Corsha’s researchers also found that 50% of respondents have little-to-no visibility into the machines, devices, or services (i.e., clients) that leverage the API tokens, keys, or certificates that their organizations are provisioning. Limited visibility can lead to secrets that are forgotten, neglected, or left behind, making them prime targets for bad actors to exploit undetected by traditional security tools and best practices. Another red flag: although 54% of respondents rotate their secrets at least once a month, 25% admit that they can take as long as a year to rotate secrets. The long-lived, static nature of these bearer secrets makes them prime targets for adversaries, much like the static nature of passwords to online accounts.
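
A basic hygiene check follows directly from the rotation finding: flag any secret older than policy allows. The sketch below hardcodes a hypothetical inventory and policy; in practice the list would come from your secrets manager's API.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=30)  # assumed rotation policy, not a standard

# Hypothetical inventory of issued API keys.
api_keys = [
    {"id": "billing-svc", "created": datetime(2022, 3, 1, tzinfo=timezone.utc)},
    {"id": "report-job", "created": datetime(2023, 1, 10, tzinfo=timezone.utc)},
]

now = datetime.now(timezone.utc)
stale = [k["id"] for k in api_keys if now - k["created"] > MAX_AGE]
print("secrets overdue for rotation:", stale)
```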


The essential check list for effective data democratization

In many cases, only IT has access to data and data intelligence tools in organizations that don’t practice data democratization. So in order to make data accessible to all, new tools and technologies are required. Of course, cost is a big consideration, says Orlandini, as well as deciding where to host the data, and having it available in a fiscally responsible way. An organization might also question if the data should be maintained on-premises due to security concerns in the public cloud. But Kevin Young, senior data and analytics consultant at consulting firm SPR, says organizations can first share data by creating a data lake like Amazon S3 or Google Cloud Storage. ... Most organizations don’t end up with data lakes, says Orlandini. “They have data swamps,” he says. But data lakes aren’t the only option for creating a centralized data repository. Another is through a data fabric, an architecture and set of data services that provide a unified view of an organization’s data, and enable integration from various sources on-premises, in the cloud and on edge devices. A data fabric allows datasets to be combined, without the need to make copies, and can make silos less likely.
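
As a small illustration of the "start with a data lake" step, the boto3 sketch below lands a dataset in S3 where other teams can reach it. The bucket, file, and key are hypothetical, and it assumes AWS credentials are already configured; governance (bucket policies, IAM) still has to be layered on top.

```python
import boto3  # pip install boto3; assumes AWS credentials are configured

s3 = boto3.client("s3")
bucket = "example-shared-data-lake"  # hypothetical bucket name

# Land a curated dataset where analysts across teams can reach it.
s3.upload_file("daily_sales.parquet", bucket, "curated/sales/daily_sales.parquet")

# List what the lake now holds; access control is what keeps
# democratized data from becoming ungoverned data.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"])
```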


Creating Great Psychologically Safe Teams

Conflict avoidance can be corrosive, even deadly, causing teams to miss opportunities and needlessly exposing them to risk. Members might recognize hazards but decline to bring them up, perhaps for fear of being seen as throwing a colleague under the bus… No matter how sensitive the issue or how serious the criticism, members must feel free to voice their thoughts openly—though always constructively—and respond to critical input with curiosity, recognizing that it is a crucial step toward a better solution. Mamoli pointed out that "there is a lot of misunderstanding around psychological safety," saying that "it doesn’t mean we’re super nice to each other and feel comfortable all the time." She explained that the resulting behaviour should be that teams "hold each other accountable" and can safely provide direct feedback saying "this is what I need from you. Or you are not doing this." She said that "this is what we need to remember psychological safety really means."


Big Tech Behind Bars? The UK's Online Safety Bill Explained

One major criticism of the Online Safety Bill is that it poses a threat to freedom of expression due to its potential for censoring legal content. Rights organizations strongly opposed the requirement for tech companies to crack down on content that was harmful but not illegal. An amendment in November 2022 removed mention of "lawful but harmful" content from the text, instead obliging tech companies to introduce more sophisticated filter systems to protect people from exposure to content that could be deemed harmful. Ofcom will ensure platforms are upholding their terms of service. Child safety groups opposed this amendment, claiming that it watered down the bill. But as the most vocal proponents of the bill, their priority remains ensuring that the legislation passes into law. Meanwhile, concerns over censorship continue. An amendment to the bill introduced this week would make sharing videos that showed migrants crossing the channel between France and the UK in "a positive light" illegal. Tech companies would be required to proactively prevent users from seeing this content.


Quantum Computing Poses a Security Risk: Its Implications are Unfavorable

The foundation of quantum computing is quantum mechanics, which is fundamentally different from classical computing. Bits are used in traditional computing to process information, and they can only be in one of two states: 0 or 1. Quantum bits, or qubits, which can be in multiple states at once, are used in quantum computing to process data. This enables quantum computers to carry out some computations much more quickly than traditional computers. The potential for quantum computing to defeat many of the encryption algorithms currently in use to safeguard sensitive data is one of its most important implications. Encryption algorithms are made to be hard to crack because they depend on mathematical puzzles that conventional computers cannot solve in any practical amount of time. Due to the speed at which quantum computing can solve some of these problems, encryption can be broken much more quickly. The security of sensitive data, including financial information, personal information, and secrets of national security, is seriously impacted by this.


Combatting the ongoing issue of cyberattacks to the education sector

The growing threat of cyberattacks has underscored that organisations can no longer depend on conventional perimeter-based defences to protect critical systems and data. New regulations and industry standards are aimed at shifting the cybersecurity paradigm – away from the old mantra of ‘trust but verify’ and instead towards a Zero Trust approach, whereby access to applications and data is denied by default. Threat prevention is achieved by only granting access to networks and workloads utilising policy informed by continuous, contextual, risk-based verification across users and their associated devices. There are many starting points on the path to Zero Trust. However, one driving principle to determine your priority of implementation should be the knowledge that the easiest way for cyberattackers to gain access to sensitive data is by compromising a user’s identity. ... Furthermore, post-mortem analysis has repeatedly found that compromised credentials are subsequently used to establish a beachhead on an end-user endpoint, which typically serve as the main point of access to an enterprise network. 


How A Company’s Philosophy To ‘Shift Left’ Is Making Headway In The Data Privacy World

Whether privacy sits within legal, security, or both is less important than ensuring your privacy team is well-resourced and able to collaborate with the organization as a whole. Key to this collaboration is making sure you have the necessary legal and engineering staff to conduct privacy reviews and navigate a rapidly evolving regulatory landscape. Separately, you need to overcome the perception that privacy is an obstacle to productivity and get your product and growth teams to see privacy as a competitive advantage that allows them to build quickly and win consumer trust. Otherwise, pushback, low adoption, and apathy will prevent you from making any real progress. To unify product development with privacy standards, you have to make it impossibly easy for product teams to comply with privacy standards. That means bringing the privacy program directly into their process, right where they are already working, as well as giving them easy-to-understand guardrails that let them build quickly, without having to engage in a painful back and forth with the privacy lawyers and engineers conducting privacy reviews.


Managing Expectations in Low-Code/No-Code Strategies

To guarantee a LC/NC strategy is successful, organizations must ensure there is a bulletproof infrastructure, data governance and security system in place, as well as full visibility into their data and applications. “As a first step, enterprises must gain an understanding of their data -- what it is, where it is and what it’s worth,” Mohan says. “From there, IT leaders can understand where security and compliance vulnerabilities lay and then work to eliminate these threats while ensuring sufficient oversight for potential legal and contractual issues.” While the responsibility of developing a LC/NC strategy falls, initially, on an enterprise’s CTO or CIO, Mohan advises tech leadership should loop in experts in data security, data protection and governance to address cyber and compliance threats and ensure employees are following proper company and legal protocols. ... “Every level of leadership can decide to use a low-code/no-code strategy, ranging from an engineering team manager who is tasked with building products for the company, to a CTO setting the strategic direction of the organization's engineering efforts,” he explains.


Attackers Crafted Custom Malware for Fortinet Zero-Day

The BoldMove backdoor, written in C, comes in two flavors: a Windows version and a Linux version that the threat actor appears to have customized for FortiOS, Mandiant said. When executed, the Linux version of the malware first attempts to connect to a hardcoded command-and-control (C2) server. If successful, BoldMove collects information about the system on which it has landed and relays it to the C2. The C2 server then relays instructions to the malware, ultimately giving the threat actor full remote control of the affected FortiOS device. Ben Read, director of cyber-espionage analysis at Mandiant, says some of the core functions of the malware, such as its ability to download additional files or open a reverse shell, are fairly typical of this type of malware. But the customized Linux version of BoldMove also includes capabilities to manipulate specific features of FortiOS. "The implementation of these features shows an in-depth knowledge of the functioning of Fortinet devices," Read says. "Also notable is that some of the Linux variant's features appear to have been rewritten to run on lower-powered devices."



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - January 19, 2023

Security risks of ChatGPT and other AI text generators

Yet ChatGPT is likely just the beginning of AI-powered cybercrime. Over the next five years, future iterations of AI will indeed change the game for cybersecurity attackers and defenders, argues a research paper entitled "The security threat of AI-powered cyberattacks" released in mid-December 2022 by Traficom, the Finnish government's transportation and communications agency. "Current rapid progress in AI research, coupled with the numerous new applications it enables, leads us to believe that AI techniques will soon be used to support more of the steps typically used during cyberattacks," says Traficom. "We predict that AI-enabled attacks will become more widespread among less skilled attackers in the next five years. As conventional cyberattacks will become obsolete, AI technologies, skills and tools will become more available and affordable, incentivizing attackers to make use of AI-enabled cyberattacks." The paper says while AI cannot help with all aspects of a cyberattack, it will boost attackers' "speed, scale, coverage and sophistication" by automating repetitive tasks.


Why Innovation Depends on Intellectual Honesty

Anxious teams score high on intellectual honesty and moderate to low on psychological safety. Team members are encouraged to be brutally honest because it’s better to be right, and win, than it is to be nice. To return to Steve Jobs: He famously described his approach as being designed to keep “the B players, the bozos, from larding the organization. Only the A players survive.” Just as famously, he cared little for creating social cohesion. Apple’s former chief design officer, Jony Ive, has described a conversation during which Jobs berated him for wanting to be liked by his team at the expense of being completely honest about the quality of their work. This example illustrates two types of conflict that emerge from intellectual honesty: task conflict and relationship conflict. Task conflict — disagreement about the work — can be highly productive for innovation and team performance. But relationship conflict, which arises when the way someone says or does something makes people feel rejected, is detrimental. Here’s why: on teams that have an anxious culture, people are willing to push one another to learn through disagreement.


8 ‘future of work’ mistakes IT leaders must avoid

Virtual reality is one technology that could have an impact on the future of work, and some IT leaders are considering the benefits. Oculus headsets from Meta, for example, are being rolled out on a trial basis at the University of Phoenix, which has made the decision to go fully remote. This was a big mindset change for Smith, who felt pre-pandemic that “face-to-face collaboration was better and high fidelity for creativity purposes,” he says. “Then, when everything shifted to full-time remote, it went against my core beliefs, so personally, I had to lean in.” Smith has come to realize that staying remote has not affected IT’s ability to collaborate, and teams have been able to remain productive and launch “complex new products into the marketplace.” He says that working remotely has increased his ability to access tech talent outside of the Phoenix area. But when people were working in a hybrid model early on, there would be multiple conversations going on, and “people on the remote end were getting the short end of the stick” because they “couldn’t get a word in edgewise,” Smith recalls.


Top intelligent automation trends to watch in 2023

Automation technology will be key to automating previously inflexible processes whilst providing intelligent, data-led nudges that help agents work efficiently in a complex operating environment. This means that companies can offer an unprecedented level of flexibility and support to their staff, while making significant improvements to engagement and wellbeing. By improving engagement between employees and employers – and fostering a culture of support and encouragement – everyone benefits. ... Since machine learning (ML) rose to significance a decade or so ago, it has rapidly transformed nearly every industry. Businesses would be wise to sharpen their skills and learn what ML has to offer. Whilst technologies in the past only processed static, historical data, ML provides a real-time capability that bridges that gap. It can help organisations become better at predicting flows and responding to them proactively rather than reactively. The potential improvement to areas such as customer service is enormous. Solutions can put “productionised” ML models – models transformed into scalable, observable, mission-critical, production-ready software – at their core.
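As a rough illustration of what "productionising" adds around a bare model, the sketch below wraps a stand-in model with input validation, a version tag, and latency logging. The class and its interface are invented for the example, not taken from any particular MLOps product.

    import logging, time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("model-service")

    class ProductionModel:
        """Adds production concerns around a trained model: input
        validation, an explicit version tag, and latency/outcome logs."""

        def __init__(self, model, version: str, n_features: int):
            self.model = model
            self.version = version
            self.n_features = n_features

        def predict(self, features: list[float]):
            if len(features) != self.n_features:
                raise ValueError(f"expected {self.n_features} features")
            start = time.perf_counter()
            result = self.model.predict(features)  # delegate to the model
            latency_ms = (time.perf_counter() - start) * 1000
            # Emit the signals an observability stack would ingest.
            log.info("version=%s latency_ms=%.2f result=%s",
                     self.version, latency_ms, result)
            return result

    class DummyModel:           # stand-in for a real trained model
        def predict(self, x):
            return sum(x) > 1.0

    svc = ProductionModel(DummyModel(), version="2023.01", n_features=2)
    print(svc.predict([0.7, 0.6]))  # logs version, latency, and outcome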


What kind of future will AI bring enterprise IT?

The incremental approach turns out to be the smartest way to build with AI/ML. As AWS Serverless Hero Ben Kehoe argues, “When people imagine integrating AI … into software development (or any other process), they tend to be overly optimistic.” A key failing, he stresses, is belief in AI/ML’s potential to think without a commensurate ability to fully trust its results: “A lot of the AI takes I see assert that AI will be able to assume the entire responsibility for a given task for a person, and implicitly assume that the person’s accountability for the task will just sort of … evaporate?” In the real world, developers (or others) have to take responsibility for outcomes. If you’re using GitHub Copilot, for example, you’re still responsible for the code, no matter how it was written. If the code ends up buggy, it won’t work to blame the AI. The person with the paystub will bear the blame, and if they can’t verify how they arrived at a result, well, they’re likely to scrap the AI model before they’ll give up their job. This is not to say that AI and ML don’t have a place in software development or other areas of the enterprise.
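One concrete way to keep that responsibility is to treat AI-suggested code like any other untrusted contribution and prove it out with tests before it ships. In this hedged sketch, slugify() is a made-up stand-in for a Copilot suggestion, not anything from the article:

    import unittest

    def slugify(title: str) -> str:
        # Imagine this body was AI-suggested; it ships only if review
        # and the tests below agree it does what we need.
        return "-".join(title.lower().split())

    class TestSlugify(unittest.TestCase):
        def test_basic(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

        def test_extra_whitespace(self):
            self.assertEqual(slugify("  Hello   World "), "hello-world")

    if __name__ == "__main__":
        unittest.main()

The tests double as the audit trail: whoever accepts the suggestion can show how they verified the result.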


How CISOs can manage the cybersecurity of high-level executives

The risk faced by executives has grown rapidly as the pandemic-driven rise of hybrid work increased the blurring of professional and personal digital lives. Complex geopolitical tensions, opportunities for digital activism against corporates—particularly in industries with higher risk profiles—and the prospect of financial gain from targeting wealthy leaders have all raised the stakes on the personal digital lives of executives. A large organization, especially if it's a publicly listed company with a C-suite leadership team that has a presence in the media and on social media, can be a lightning rod for the attention of bad actors, says Gergana Winzer, partner of cyber services with KPMG Australia. “Some of these small-time criminals have awakened to the reality of being able to make monetary returns by utilizing easy-to-buy malware or ransomware online and just deploying it across those types of high-net-worth individuals,” Winzer says. This class of personal risks can take many different forms, according to Pierson, who says one of the biggest risks is to intellectual property—the loss of corporate documents from executives’ personal devices or personal accounts where there are fewer or no controls.


Taking the Reins on IT Interoperability

Interoperability can be elusive because many organizations embark on tactical changes or fail to see the complete picture, Barnett says. “In many cases, they focus only on a part of the organization without fully understanding the impact on technology investments, process reengineering, and human capital assets,” he explains. The intersection of operational technology (OT) and information technology (IT) can prove particularly nettlesome. Historically, these two entities have operated separately, with attempts to connect systems and data an afterthought. “This often leads to the creation of data silos … that hinder agility, reduce productivity, impede customer experience improvements, and hamper scalability,” Barnett says. Business and IT leaders who ignore these problems do so at their own peril. Accenture found that 66% of organizations struggle with the sheer number of applications. This results in technical debt and a loss of agility, McKillips says. In addition, 60% are unable to align their application strategy with overall business goals and 44% struggle to identify the right business case or ROI. Remarkably, 34% believe interoperability is simply too expensive.


ICS Confronted by Attackers Armed With New Motives, Tactics, and Malware

The report identified top trends in the ICS threat landscape based on a compilation of information from various sources including open source media, CISA ICS-CERT advisories, and Nozomi Networks telemetry, as well as on exclusive IoT honeypots that Nozomi researchers employ for "a deeper insight into how adversaries are targeting OT and IoT, furthering the understanding of malicious botnets that attempt to access these systems," Gordon says. What researchers observed over the last six months was a significant uptick in attacks that caused disruption to a number of industries, with transportation and healthcare among the top new sectors finding themselves in the crosshairs of adversaries, alongside more traditional targets. Attackers are using various methods of initial entry to ICS networks, although some common weak security links that have historically plagued not just ICS but the entire enterprise IT sector — weak/cleartext passwords and weak encryption — continue to be the top access threats. Even so, “root” and “admin” credentials are most often used as the way for threat actors to gain initial access and escalate privileges once in the network, the findings show.


Cybersecurity CTO: A day in the life

Given the scope of the job, a CTO is rarely going to have a consistent daily schedule. Instead, goals and cadences are established weekly. That being said, I do go into the office every day. My typical workday begins at 9:30 a.m., and I take an electric scooter to get into the office. Our headquarters are located in Tel Aviv, so the weather is almost always perfect for the scooter. On a weekly basis, I hold one-on-one meetings with specific managers to understand team needs, review KPIs to ensure they’re being met, and review our proof-of-concept (POC) projects to ensure our customers and potential customers are advancing. These POC reviews are where we often catch technical issues, allowing us to fix them before they cause problems for our customers. While I’m responsible for several employees within our R&D department, I do my best to distance myself and empower our VP of R&D to manage the team. The goal is quality – not getting bogged down in how or when people work. I usually wrap up my time in the office around 6:30 or 7:00 p.m. 


Proven Solutions to Five Test Automation Issues

When you run your automated tests, you need the dependent systems to support your test scenarios. That includes setting up API and service responses to match what your test cases need. Setting up test data in backends can be problematic, as those backends might not be within your team’s control. Relying on another team to set up test data means you may end up with incorrect or missing data and, therefore, cannot continue working on or running your automated tests. Another issue is that even if you have the test data, running your automated tests frequently in the build pipeline might use it all up (test data burning). Then you need a test data refresh, which might take even longer than the partial test data setup, and you are blocked again. And even if you have all the test data you need, when you or another team run automated or manual tests against the same services, the test data might change (for example, the balance on an account or a user’s list of purchased items). The tests then break because of test data issues rather than actual issues with the product.
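A common way around both test data burning and shared-state collisions is for each test to create and tear down its own data rather than lease it from a shared pool. A minimal pytest-style sketch, with an in-memory dictionary standing in for the backend you would normally reach through a data-setup API:

    import uuid
    import pytest

    _ACCOUNTS: dict[str, float] = {}  # stand-in for the real backend

    @pytest.fixture
    def fresh_account():
        # A unique ID per test means parallel suites and repeated pipeline
        # runs never consume or trample each other's data.
        account_id = f"test-{uuid.uuid4()}"
        _ACCOUNTS[account_id] = 100.0    # create isolated test data
        yield account_id
        _ACCOUNTS.pop(account_id, None)  # leave the backend clean

    def test_withdrawal(fresh_account):
        _ACCOUNTS[fresh_account] -= 40.0  # exercise the system under test
        assert _ACCOUNTS[fresh_account] == 60.0

The trade-off is setup time per test, but the suite no longer depends on another team's data or on how many times the pipeline ran today.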



Quote for the day:

"People don't resist change. They resist being changed." -- Peter M. Senge