Daily Tech Digest - April 19, 2023

Why Your Current Job May Be Holding Back Your IT Career

Failing to pursue professional development opportunities and not maintaining a current and relevant skillset are both great ways to shift a career into neutral. “This includes not keeping up with the latest industry trends and technologies, not networking with other professionals, and not pursuing additional training or education opportunities,” Delfine says. “IT professionals need to continually develop their skillsets and be aware of and learn new methods and tools that can be applied across multiple industries.” Another mistake is spending too little or too much time in a particular role. Knowing when to stay and when to move on is a skill in itself, says Erin Goheen, vice president of technology at freight and logistics services firm XPO. “I've seen cases where job-hopping can be detrimental to one's career because it prohibits technologists from maximizing the amount of learning and skill development gained in a particular role,” she explains. “Conversely, if you’re in a role for too long and you're no longer learning and expanding your professional capabilities, other professionals who are actively growing in similar roles will pass you in their career trajectories.”


Top risks and best practices for securely offboarding employees

Shadow IT and information systems that aren’t part of a business’s identity and access management (IAM) architecture are a huge risk to successful, secure offboarding, says Richard Jones, global CISO at Orange Cyberdefense. This is magnified for cloud and SaaS systems/applications that don’t require specific network access or physical presence in an office, with IT teams often unaware of the extent of employees’ SaaS usage. ... Another challenge is managing software asset licenses. If employees aren’t properly offboarded from cloud system licenses, this can lead to excessive IT costs as well as security risks, as licenses are often charged per user, per month, Jones says. It’s not just the risks of outgoing employees themselves that CISOs need to consider. “In most cases, mass layoffs cause remaining employees to be concerned about their job security, which can increase insider threats and introduce security gaps caused by unintentional negligence,” says Mohan Koo, CTO at DTEX Systems.


How Cybersecurity Leaders Can Capitalize on Cloud and Data Governance Synergy

In modern organizations, explosive amounts of digital information are being used to drive business decisions and activities. However, both organizations and individuals may lack the tools and resources needed to carry out data governance effectively at scale. I’ve experienced this scenario in both large private and public sector organizations: trying to wrangle data in complex environments with multiple stakeholders, systems, and settings. It often leads to incomplete inventories of systems and their data, along with who has access to it and why. Cloud-native services, automation, and innovation enable organizations to address these challenges as part of their broader data governance strategies and under the auspices of cloud governance and security. Many IaaS hyperscale cloud service providers offer native services to enable activities such as data loss prevention (DLP). For example, AWS Macie automates the discovery of sensitive data, provides cost-efficient visibility, and helps mitigate the threats of unauthorized data access and exfiltration.
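
As a hedged illustration of what such native tooling looks like in practice, here is a minimal boto3 sketch that starts a one-time Macie sensitive-data discovery job and lists its findings. The account ID, bucket name, region, and job name are placeholders, and the sketch assumes Macie is already enabled for the account.

```python
# Minimal sketch (assumptions: boto3 installed, AWS credentials configured,
# Macie already enabled, and an S3 bucket to scan; all names are placeholders).
import boto3

macie = boto3.client("macie2", region_name="us-east-1")

# Kick off a one-time sensitive-data discovery job over a bucket.
job = macie.create_classification_job(
    jobType="ONE_TIME",
    name="dlp-discovery-demo",  # hypothetical job name
    s3JobDefinition={
        "bucketDefinitions": [
            {"accountId": "123456789012",          # placeholder account ID
             "buckets": ["example-data-bucket"]}   # placeholder bucket
        ]
    },
)
print("Started Macie job:", job["jobId"])

# Later, list any sensitive-data findings Macie has produced.
finding_ids = macie.list_findings().get("findingIds", [])
print("Findings so far:", finding_ids)
```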


Seven Tips for Achieving Dynamic Professional Transformation with Framework Modeling

Framework modeling can be a significant differentiator and can empower professionals with rich knowledge repositories of best practices derived from frameworks. The modeling of the framework offers a big-picture approach and life cycle perspective for achieving goals. This can aid professionals as existing and emerging technologies impact which professional skills are relevant and required in the market. Innovative technologies continue to emerge and create an impact on employment due to new services made possible through innovation and automation. For example, there is much speculation about how ChatGPT will impact employment opportunities in various lines of work. There is also widespread concern that management will prefer to harness technology rather than employees when considering value delivery in the future. Hence, professionals as knowledge workers can benefit by upgrading their skills through adopting the framework modeling approach. ...  Framework modeling can be considered the skill of carving the required knowledge from the structure and contents of a framework per an enterprise’s needs.


FBI and FCC warn about “Juicejacking” – but just how useful is their advice?

The idea is simple: people on the road, especially at airports, where their own phone charger is either squashed away deep in their carry-on luggage and too troublesome to extract, or packed into the cargo hold of a plane where it can’t be accessed, often get struck by charge anxiety. Phone charge anxiety, which first became a thing in the 1990s and 2000s, is the equivalent of electric vehicle range anxiety today, where you can’t resist plugging in for a bit more juice right now, even if you’ve only got a few minutes to spare, in case you hit a snag later on in your journey. But phones charge over USB cables, which are specifically designed so they can carry both power and data. So, if you plug your phone into a USB outlet that’s provided by someone else, how can you be sure that it’s only providing charging power, and not secretly trying to negotiate a data connection with your device at the same time? What if there’s a computer at the other end that’s not only supplying 5 volts DC, but also sneakily trying to interact with your phone behind your back?


7 keys to controlling serverless cloud costs

Overprovisioned memory and CPU allocations are two culprits often found behind serverless computing cost overruns. When you execute a serverless function in your cloud application, your CSP allocates resources according to the function’s configuration. When billing time comes around, your CSP then bases your bill on the amount of resources your application consumes. It makes good business sense to spend the extra time during the design phase to determine the appropriate amount of resources each serverless function requires, so you’re minimizing costs. Train your cloud developers to use compute only when necessary, advises CloudZero. They give the example of using step functions to call APIs instead of Lambda functions, meaning you only pay for the step functions. The major CSPs and cloud management platforms include key performance indicator (KPI) monitoring dashboards of one form or another. You can also use observability tools, such as Datadog, for KPI monitoring. Monitoring your serverless KPIs should figure prominently in your project and deployment plans.
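
To make the right-sizing arithmetic concrete, here is a back-of-the-envelope sketch of the serverless cost model described above. The per-GB-second and per-request rates are illustrative (roughly AWS Lambda's published x86 us-east-1 prices at the time of writing), so check your CSP's current price list before relying on the numbers.

```python
# Back-of-the-envelope serverless cost sketch; rates are illustrative assumptions.
GB_SECOND_RATE = 0.0000166667    # USD per GB-second (assumed rate)
REQUEST_RATE = 0.20 / 1_000_000  # USD per invocation (assumed rate)

def monthly_cost(memory_mb: int, avg_duration_ms: float, invocations: int) -> float:
    # Billed compute is memory (GB) x duration (s) x number of invocations.
    gb_seconds = (memory_mb / 1024) * (avg_duration_ms / 1000) * invocations
    return gb_seconds * GB_SECOND_RATE + invocations * REQUEST_RATE

# Overprovisioning in action: the same workload at 1024 MB vs. a right-sized 256 MB.
print(f"1024 MB: ${monthly_cost(1024, 120, 10_000_000):.2f}/month")
print(f" 256 MB: ${monthly_cost(256, 120, 10_000_000):.2f}/month")
```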


New DDoS attacks on Israel’s enterprises, infrastructure should be a wake-up call

“Generally speaking, all these attacks happen with more or less sophisticated forms, either abusing different vulnerabilities and systems or brute force DDoS,” Izrael said. “What’s different about these is that an unsophisticated DDoS tactic would be to blast a website with traffic and take it down. What’s happening here is that attackers have been targeting a lot of weak spots where they are taking down services.” Izrael added that the attackers have also managed to hobble, albeit briefly, smart IoT functionality at individual homes, buildings and other structures. Justin Cappos, professor of computer science and engineering at the NYU Tandon School of Engineering, said network provisioning operators need to pay attention to any new group launching large-scale DDoS attacks. ... Izrael said the combination of direct attacks by the Iranian government and indirect attacks by affiliated groups achieves two goals: keeping the provenance of the attacks very murky and making the attack seem bigger because the origin of the attacks is unclear. 


Rising to the challenge: the role of boards in effective bank governance

Effective governance has been a priority of our supervision for several years, and will continue to be in the years to come. As part of our work on this priority, we are carrying out an update of our supervisory expectations on governance. Today’s seminar is an important opportunity to listen to the industry as we fine-tune those expectations, and marks one of many milestones along the way. Particularly in the current climate, it is essential for banks to have strong and effective governance. A bank needs a board that can steer it through calm and stormy waters alike, setting the compass on the strategy for the bank, while ensuring a sustainable business model and monitoring risks in a forward-looking manner. In today’s environment, backward-looking indicators of risk might be misleading. It is therefore more important than ever for boards to be vigilant. Boards need to take a proactive approach to identifying emerging risks and trends, assessing potential impacts on the bank, and taking appropriate actions to mitigate them.


Unlocking the power of a multigenerational workforce

Those organisations that don’t innovate die a slow death; those who are not open to change and not forward-looking will not be far behind. Organisations have to constantly employ different ‘listening methods’ to gauge the pulse of employees across generations, check on new trends and keep revisiting their programs and policies to imbibe what’s new, instead of sticking to the ‘tried and tested’. ... Learning only happens when one’s thoughts and opinions are challenged by people from entirely different backgrounds or with a very different thought process from one’s own. The influx of talent from diverse groups, especially from across generations, hence continues to be essential for the organisation. The early-age talent brings enthusiasm and challenge; the older age group folks infuse much-needed wisdom and experience! Sensitising managers and leaders is equally important, since they are the ones who rally staff to take the organisation ahead, especially in turbulent times. ‘How to lead a team with members across generations’ is a learning module that organisations must invest in – incorporating elements like empathy, situational leadership and leaving one’s ego behind.


CIO Fletcher Previn on designing the future of work

The network that can properly support hybrid work needs to be more distributed and porous, and it has a very different attack surface than when we were all in the office. Technologies like Zero Trust become even more important, along with split tunnel VPNs and having the right endpoint security strategy so you don’t have to backhaul all the traffic in order to inspect it. You need carrier and path diversity at your carrier neutral facilities and network points of presence, and you want to have a good peering strategy so you can bring applications closer to the end users and take traffic off the public internet. Full-stack observability becomes more urgent in a hybrid world. How do we really understand the experience our employees are having when they are connecting from all sorts of networks that we don’t manage? We need to understand the performance of the public internet and various SaaS tools in order to really know what our hybrid work experience is going to be for our people. We also need tools that provide valuable observability that lets us detect and fix problems before our employees even know there is an issue brewing.
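
As a toy illustration of the kind of employee-experience telemetry described here, the following sketch times responses from a couple of well-known SaaS endpoints from wherever the employee happens to be connecting. The endpoint list and timeout are arbitrary examples, not a recommendation of any particular tool.

```python
# Toy synthetic probe in the spirit of full-stack observability: time how long
# well-known SaaS endpoints take to respond. Endpoints here are arbitrary examples.
import time
import urllib.request

ENDPOINTS = ["https://www.office.com", "https://www.salesforce.com"]

for url in ENDPOINTS:
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=5)
        elapsed_ms = (time.monotonic() - start) * 1000
        print(f"{url}: {elapsed_ms:.0f} ms")
    except Exception as exc:  # flag unreachable or slow endpoints
        print(f"{url}: probe failed ({exc})")
```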



Quote for the day:

"Leadership should be born out of the understanding of the needs of those who would be affected by it. " -- Marian Anderson

Daily Tech Digest - April 17, 2023

The Power Of Silence: 10 Reasons Silent People Are Successful

Being silent often goes hand-in-hand with improved observation. When you’re not focused on expressing your thoughts, you have more mental bandwidth to take in your surroundings. This heightened awareness allows you to understand people and situations better. Many successful individuals credit their observation skills as contributing to their achievements. By carefully observing their environment, they can identify opportunities and threats others might overlook. A quiet mind leads to better focus and concentration. When you’re silent, it’s easier to direct your attention to the task at hand, free from distractions or competing thoughts. This improved focus can enhance your decision-making abilities and boost your overall productivity. ... Silence can be a powerful tool for emotional regulation. Silent individuals often excel at managing their emotions, avoiding impulsive actions, and maintaining composure in challenging situations. Staying calm under pressure can lead to better decision-making and increased resilience. 


7 cybersecurity mindsets that undermine practitioners and how to avoid them

Security is often seen as a standalone function or additional product that is bolted onto the real infrastructure or as a discrete thing to be finalized and delivered. This is a long-standing view in software development, something similar to the way we once thought about quality: as a distinct, separate component of things. “Quality is not an act, it’s a habit,” according to an elegant paraphrase of Aristotle. Just like quality, security is not a finished product but rather an ongoing discipline. When we see security as a practice, to be continually refined and honed, it frees up the energy to engage it as such. We grow healthier by exercising regularly and monitoring our diet daily; such is security. If we want to get good at guitar or a martial art, we must keep coming back to it and refining it, but there is always more to develop — just as in security. Instead of bemoaning this fact, we can lean into it and use it to fuel our efforts. It’s actually a blessing to work in a field that always has room for growth and can fully engage our capabilities. 


A distributed database load-balancing architecture with ShardingSphere

The key point of ShardingSphere-Proxy cluster load balancing is that the database protocol itself is designed to be stateful (connection authentication status, transaction status, prepared statements, and so on). If the load balancing layer on top of ShardingSphere-Proxy cannot understand the database protocol, your only option is four-layer (transport-level) load balancing in front of the ShardingSphere-Proxy cluster. In this case, a specific proxy instance maintains the state of the database connection between the client and ShardingSphere-Proxy. Because the proxy instance maintains the connection state, four-layer load balancing can only achieve connection-level load balancing. Multiple requests on the same database connection cannot be distributed round-robin across multiple proxy instances, so request-level load balancing is not possible. ... Theoretically, there is no functional difference between a client connecting directly to a single ShardingSphere-Proxy or to a ShardingSphere-Proxy cluster through a load-balancing portal. However, there are some differences in the technical implementation and configuration of the different load balancers.
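
A minimal sketch of what connection-level (four-layer) load balancing means for a client follows. The load balancer hostname, port, credentials, and table are hypothetical; the sketch assumes the pymysql driver talking to a ShardingSphere-Proxy cluster that speaks the MySQL protocol.

```python
# Minimal client sketch behind a hypothetical L4 load balancer fronting a
# ShardingSphere-Proxy cluster (host, credentials, and table are placeholders).
import pymysql

conn = pymysql.connect(host="proxy-lb.internal", port=3307,
                       user="root", password="root", database="sharding_db")
try:
    with conn.cursor() as cur:
        # Everything on this cursor rides one TCP connection, so the L4
        # balancer keeps it pinned to a single proxy instance: connection
        # state (transactions, prepared statements) stays consistent, but
        # individual requests cannot be spread across proxy instances.
        cur.execute("BEGIN")
        cur.execute("SELECT * FROM t_order WHERE order_id = %s", (1,))
        print(cur.fetchall())
        cur.execute("COMMIT")
finally:
    conn.close()
```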


How Synthetic Data Can Help Train AI and Maintain Privacy

Common use cases for synthetic data include software engineering when new features are built but no production data is available, says Jim Scheibmeir, senior director analyst with Gartner. For instance, software being tested for an autonomous vehicle may need new information about the weather or obstructions in the road, he says; different scenarios can be generated to test and prepare that autonomous algorithm. Data scientists who are trying to create new algorithms, Scheibmeir says, or need to prove out new hypotheses might struggle to get their hands on production data. That limited availability might have to do with restricted access, compliance, or regulation, making synthetic data attractive. The rise of generative AI might also play a role in synthetic data generation. “Certainly, ChatGPT is going to reinvigorate our imagination of what generative can do for us,” Scheibmeir says. “Gartner urges organizations to look at proper test data management, including synthetic data generation, for a few different reasons.” 
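
As a generic illustration (not the specific tooling Gartner describes), the following scikit-learn sketch generates a labeled synthetic dataset and trains a model on it, standing in for production data a data scientist may not be able to access.

```python
# Generate a synthetic labeled dataset and train a model on it, as a stand-in
# for restricted production data. Assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5_000, n_features=20,
                           n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print("Accuracy on synthetic hold-out:", model.score(X_test, y_test))
```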


What business executives don’t understand about IT

When the CEO doesn’t think IT is important enough to get top-level attention, that message filters down to the rest of the corporation. IT is not viewed as being as important as Sales, Finance, Manufacturing, Operations, or Marketing — dangerous in a highly competitive environment where efficient or innovative systems can spell the difference between the corporation’s success or failure. ... Systems development is another important area executives need to understand. The systems IT develops will not be used by IT; rather, they will become integral to the requesting department. It is important, therefore, that management understand the processes involved in proposing the system, estimating the cost, determining the ROI, producing the deliverables, changing the specifications and time frames, and measuring the system’s effectiveness. After all, the completed system may impact sales projections, departmental costs, and individual incentives, to name a few. Management must also ensure that the people in the user organization are given the time and recognition to do the work required to develop the precise specifications of the system. 


Tech companies including Adobe are taking a new look at a big industry debt issue

Despite the drag of technical debt that the data suggests, some industry executives say it gets a bad reputation. “If you’re tech-debt-free, you’re not innovating,” said Frans Xavier, CTO of low-code/no-code security automation platform Swimlane. In this sense, technical debt is a signal of iteration. In fact, in a recent report from consumer electronics company TE Connectivity, 55% of the engineers surveyed said it’s iteration — not total transformation — that represents innovation at its core. Adobe head of strategic development for creative cloud partnerships Chris Duffey is looking to reshape technical debt. “I would offer to reframe technical debt as the value of insight gathering throughout the innovation creation process,” Duffey said. The “fail fast” dogma that propels much of the technology industry (when not taken literally) references experimentation, insight gathering, and optimization, he added. This can be hard to see when you look solely at the data, in part because it’s difficult to quantify the process of innovation. 


Moving beyond DEI: Fostering belongingness in the workplace

Measuring belongingness is different from simply measuring diversity and inclusion. Diversity and inclusion are behaviours, meaning they can be mostly measured through policy and procedures. On the other hand, belongingness is an emotional response that covers an array of factors such as an individual’s trust, comfort, and openness towards the company. Therefore, belongingness happens when the employee is ‘valued’, and value here means not only are they acknowledged and appreciated for their work, but they also understand how their work contributes to the company’s vision, mission, key priorities, and growth. It also means that ‘they matter’ – being part of the teams, on ‘top of mind’ for leading and driving the initiatives, being ‘trusted’ and ‘cared for’, and that is the ultimate cement that joins them to the culture of the company. Belongingness is in these little things that define “moments that matter” – let's explain that in greater detail with questions that come to an employee’s mind when they experience an organization.


Enhance data governance with distributed data stewardship

Data stewards are a central point of contact. They enforce accountability of the data lifecycle, and oversee data governance and visibility. In many instances, data stewardship is a centralized business or IT function. These settings require enterprise data governance or expertise in data management and governance execution. Distributed data stewardship is a model or framework that allows teams closest to the data to manage access and permissions. Data management is decentralized and resides within the business unit. ... The core component of a distributed data stewardship program is similar to a data stewardship one. The success of such a model depends on how well a decentralized IT, governance and distributed access management model works. Because a distributed data stewardship model delegates data management responsibilities throughout the enterprise, the fundamental difference between a data stewardship model and a distributed data stewardship model is in shifting an organization toward decentralizing data access. This requires time, effort, cadence and key stakeholders who agree and adhere to such a framework.


Cognitive flexibility: the science of how to be successful in business and at work

Cognitive flexibility aids learning under uncertainty and the negotiation of complex situations. This is not merely about changing your decisions. Higher cognitive flexibility involves rapidly realising when a strategy is failing and switching strategies. The importance of cognitive flexibility was first discovered in clinical patients. The function engages areas of the brain involved with decision making, including the prefrontal cortex and striatal circuitry. When this circuitry becomes dysfunctional due to neurological diseases or psychiatric disorders, it can cause rigidity of thought and a failure to adapt. Cognitive flexibility is required in many real-world situations. The category of workers that requires the highest level of adaptability is arguably entrepreneurs. Entrepreneurs need to show flexibility not only in terms of idea generation, but also for resource allocation and social exchanges. Indeed, our previous research has shown that entrepreneurs, compared with high-level managers, have increased cognitive flexibility. This ultimately helps them to solve problems and make risky decisions successfully.


IT leadership: Mission-driven IT and finding your "why"

People talk about IT strategy or tech strategy or product strategy; they talk about deliverables, roadmaps, all of that stuff. To me, it all starts with the mission, and our mission is to transform lives by unlocking better evidence. And really what that means day to day is helping facilitate and support and enable the clinical trial process, which we know in recent years especially has—the importance of which is really second to none. It’s accelerated during the pandemic, naturally, as we look for treatments and preventatives for Covid. But now, what it’s done is it’s poured gas on the fire in a whole bunch of other areas, too. So the industry is working faster than ever, and I like to think we’re doing life-changing work. I believe we are. And the technology that we build at Clario and the expertise that we bring helps support the companies that are running clinical trials, the sponsors, the people who are running trials day to day, the sponsor—or the trial teams, as well as the sites. You know, the folks, the nurses, the clinicians, the physicians who are all part of this process and helping facilitate this.



Quote for the day:

"Increasingly, management_s role is not to organize work, but to direct passion and purpose." -- Greg Satell

Daily Tech Digest - April 15, 2023

6 best practices to develop a corporate use policy for generative AI

The first step to craft your corporate use policy is to consider the scope. For example, will this cover all forms of AI or just generative AI? Focusing on generative AI may be a useful approach since it addresses large language models (LLMs), including ChatGPT, without having to boil the ocean across the AI universe. ... Involve all relevant stakeholders across your organization – This may include HR, legal, sales, marketing, business development, operations, and IT. Each group may see different use cases and different ramifications of how the content may be used or mis-used. Involving IT and innovation groups can help show that the policy isn’t just a clamp-down from a risk management perspective, but a balanced set of recommendations that seek to maximize productive use and business benefit while at the same time manage business risk. Consider how generative AI is used now and may be used in the future – Working with all stakeholders, itemize all your internal and external use cases that are being applied today, and those envisioned for the future.


There Is No Resilience without Chaos

Chaos engineering has emerged as an increasingly essential process for maintaining application reliability — not only in cloud native environments but in any IT environment. Unlike pre-production testing, chaos engineering involves determining when and how software might break in production by testing it in a non-production scenario. In this way, chaos engineering becomes an essential way to prevent outages long before they happen. ... Chaos engineering, when done properly, requires observability. Problems and issues that can cause outages or degrade performance can be detected well ahead of time, as bugs, poor performance, security vulnerabilities, and the like become manifest during a proper chaos engineering experiment. Once these bugs and kinks that could lead to outages if left unheeded are detected and resolved, true continued resiliency in DevOps can be achieved. In the event of a failure, the SRE or operations person seeking the source of error is often overloaded with information.
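
To make the idea of a controlled experiment concrete, here is a toy latency-injection sketch in Python. The probabilities, delay, and target function are invented for illustration, and, as the article stresses, experiments like this belong in non-production environments.

```python
# Toy latency-injection experiment: artificially delay a configurable fraction
# of calls, then watch whether timeouts, retries, and alerts behave as intended.
import random
import time
from functools import wraps

def inject_latency(probability: float = 0.2, delay_s: float = 2.0):
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if random.random() < probability:
                time.sleep(delay_s)  # simulated network/dependency stall
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_latency(probability=0.5, delay_s=1.0)
def fetch_profile(user_id: int) -> dict:
    return {"user_id": user_id, "name": "example"}  # placeholder service call

for i in range(5):
    start = time.monotonic()
    fetch_profile(i)
    print(f"call {i}: {time.monotonic() - start:.2f}s")
```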


Data Governance: Simple and Practical

Purpose-driven data governance programs narrow their focus to deliver urgent business needs and defer much of the rest, with a couple of caveats. First, data governance programs are doomed to fail without senior executive buy-in and continuous engagement of key stakeholders. Without them, no purpose can be fulfilled. Second, data governance programs must identify and gain commitment from relevant (but perhaps not all) data owners and stewards, but that doesn’t necessarily mean roles and responsibilities need to be fully fleshed out right away. Identify the primary purpose then focus on it – sounds like a simple formula, but it’s not obvious. Many data governance leaders are quick to define and pursue their three practices or five pillars or seven elements, and why shouldn’t they? They need those capabilities, but wanting it all comes at the sacrifice of getting it now. Generate business value with your primary purpose before expanding. ... An insurer explained to me their dashboards weren’t always refreshed, and when they were, wide fluctuations in values made it impossible to make informed decisions.


What is platform engineering? Evolving devops

The developer portal is the main mechanism and expression of platform engineering. Its main purpose is to gather together the developer's tooling, documentation, and interactivity in one place. It is a kind of front end to the organization's developer infrastructure. Developer portals (aka internal developer platforms) have evolved out of several needs and trends. This primer on developer portals delineates these tools into three types: universal service catalog, API catalog tied to API gateway, and microservices catalog. APIs figure large in platform engineering because the uptake of microservices architecture has caused a great deal of increased complexity for modern software teams. Orchestrating microservices in a large organization can be very challenging. Just understanding what microservices are involved in a given use case can be difficult. A developer portal offers a unified view into the overall web of microservices. Another aspect of the developer portal is offering a standard framework to combine the tools used by the organization.


EU privacy regulators to create task force to investigate ChatGPT

In a statement posted on its website, the EDPB said the task force was intended to “foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.” Last month, Italy’s data privacy regulator issued a temporary ban against ChatGPT over alleged privacy violations relating to the chatbot’s collection and storage of personal data. Italy's guarantor for the protection of personal data ordered the temporary halt on the processing of Italian users’ data by ChatGPT’s parent firm OpenAI, unless it complied with EU privacy laws. In order to have the service reinstated, the Italian guarantor outlined a list of data protection requirements that OpenAI must comply with, including increased transparency into how ChatGPT processes data, the right for nonusers to opt out of having their data processed, and an age-gating system for signing up to the service. In the wake of the ban, OpenAI CEO Sam Altman tweeted: “We of course defer to the Italian government and have ceased offering ChatGPT in Italy (though we think we are following all privacy laws).”


The mechanics of entrepreneurship

Lidow codifies this innovative shove, arguing that entrepreneurs invent and create enduring change in one of three ways: by scaling supply, scaling demand, or scaling simplicity. The first category includes those who scaled up their supply by devising an efficient system and then repeating it. In the late 1700s, the enterprising coin-maker Matthew Boulton, for example, leveraged his superior knowledge of metalworking to create a new process for producing coins quickly and uniformly—spawning countless societal changes. This included the swarm of entrepreneurs in the early- to mid-1800s who conceived the modern railway. Titans of the second category, scaled demand, include cultivators of desire like Wedgwood, Selfridge, and the American PR pioneer Edward Bernays, who coined the phrase “public relations” and created the industry. Through carefully cultivated propaganda campaigns, Bernays convinced wide swaths of folks in the US to support the country’s efforts in World War I and, later, stimulated broad demand for products such as bacon and tobacco.


3 IT leadership mistakes to avoid

The first exercise we undertook was brainstorming and agreeing on a set of operating principles, such as all ideas would be respected regardless of which side they came from; facts and data—not emotion—would drive decision-making; and creating a positive client experience would be our collective North Star. These principles became our rallying cry and helped lead the team to a very successful client conversion. Contrast that with leaders who set rigid rules for their teams to follow. Leading by a set of hard rules will limit innovation, hinder individual and team development, and create a constant need to add or modify the rules as situations change. ... There is no such thing as a perfect organizational structure—there’s only an array of alternatives, each with its own respective strengths and weaknesses. The only way to make an inherently flawed organizational structure work is to have individuals collaborate under a common strategy, purpose, and shared goals. Great teams also take individuals who are willing to sacrifice for the good of the whole.


Data leader Tejasvi Addagada on the value of data governance

If data is siloed, it cannot be used for developing insights and products. For an organization that is yet to invest in managing its data and thinks centralization is costly or a bottleneck, a data mesh architecture is a decentralized approach at its core, with each domain team ingesting its operational and analytical data and developing data products. ... From the initial concept of corporate governance, IT governance has evolved into the recent concept of data governance. Globally, the adoption of cloud services, the evolution of modern data stacks, and improved data literacy have led to a greater interest in governing data over the past years. Implementing data governance is necessary to get sustainable value from data. A subfunction can be formalized as an authorized provisioning service. It can support activities that help ensure that a data element can be rightfully sourced from a designated provisioning point. In addition, it can have the domain team express their trust by certifying data as a system of record that is authorized for provisioning.


Google Cloud Unveils AI Tools to Streamline Preauthorizations

“The Claims Acceleration Suite’s Claims Data Activator uses Document AI, Healthcare Natural Language API, and Healthcare API to convert this unstructured data to structured data and establish data interoperability,” Waldron says. “This speeds up the process, and significantly reduces administrative burdens and costs, enabling experts to make faster, more informed decisions that improve patient care.” A quick prior authorization process is essential for a patient who may need approval for transportation to an important medical procedure such as a colonoscopy, according to Waldron. Patients also seek prior authorizations to use a digital device as part of weight management or a care management plan for conditions such as diabetes. A goal of Google’s Claims Data Activator is to make healthcare prior authorization data more interoperable, or accessible for all parties. 


Data sharing between public and private is the answer to cybersecurity

Businesses and governments are already interlinked in their attempts to keep ahead of cybercriminals. You only need to look at examples of the recent Royal Mail attack, which saw the NCSC and the business working together to reduce its impact. And across the Pond, Biden’s newly announced Cybersecurity Strategy will focus on ensuring closer collaboration on cyber between government and industry. Whilst all of this is moving in the right direction, there’s more work to be done to create more intentional and systematic cross-sharing and learning from one another. To kickstart the open flow of knowledge in the industry, both public and private organizations could sponsor a wider peer network for security experts that streamlines intelligence from private to public or vice versa and offers support. Gartner offers a Peer Connect network of business leaders that encourages the open discussion of trends and ideas, critical to business decision-making. 



Quote for the day:

"A leader's dynamic does not come from special powers. It comes from a strong belief in a purpose and a willingness to express that conviction." -- Kouzes & Posner

Daily Tech Digest - April 07, 2023

Why leadership training fails — and how to fix it

It could be attributed to the existing culture of the leadership team. Do the organization’s leaders possess a growth mindset, or is intellectual humility lacking? Unless a leader wants to improve, it’s unlikely that they will. Leaders must be motivated to make the time and have the patience for the kind of reflective practice that makes learning stick. What about the group of high-potential employees, who have every intention of applying what they learn, but then struggle to translate the skills and knowledge into practice? It’s possible that the way the program is designed could be hindering successful learning transfer. One size does not fit all in leadership training. To be effective, it must be designed with the learners’ needs in mind, whether they’re high-performing individual contributors without supervisory experience, or C-suite executives. If participants don’t find the content relevant to their role and objectives, learner engagement will suffer. Lastly, “leadership training” cannot be presented as a one-off event at the organization but rather, an ongoing process. Formal training is only one aspect of learning.


Delivery Leadership is both an Art and a Science

In the present time, with businesses becoming increasingly interconnected and globalized, enterprises worldwide seek modern technology products that are straightforward yet impactful in enhancing their operational efficiency, productivity, and market penetration, and in reducing operational expenses. These business requirements prompt organizations to explore technologies such as cloud computing, ERP, AI, data analytics, automation, and business intelligence. The provision of intricate IT solutions and services demands expertise and attributes from both the art and science aspects of the field. ... Delivery leaders must have exposure to industry, domain, and business knowledge to be successful in their role. They must place themselves in the shoes of customers and think about what value-add services their customers perceive to be important for their businesses. A proactive approach and forward-looking thinking are the two most important skills a delivery leader must possess. They must also encourage a culture in which sharing prescriptive approaches and making business recommendations become the new norm. 


Asynchronous Patterns for Microservice Communication

Since microservices communicate using asynchronous methods, keeping their patterns fast and responsive is essential. Fortunately, there are several quick ways to do this. For example, having your services communicate asynchronously with RabbitMQ or Kestrel before resorting to synchronous methods is a good idea. This way, you can maximize network efficiency while minimizing response delays. You can also use Kestrel’s retries for excellent reliability and scalability when communicating between machines. In addition, it’s a good idea to use event-driven communication for better responsiveness between your components and clients. If you want to connect with multiple microservices without creating dependencies or tightly coupling them, consider using asynchronous message-based communication in your microservices architecture. This approach leverages events to facilitate communication between microservices, which is commonly referred to as event-driven communication.
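
As one hedged example of the message-broker approach, here is a minimal RabbitMQ publish/consume sketch using the pika client. It assumes a broker running on localhost, and the queue name and event payload are arbitrary.

```python
# Minimal RabbitMQ publish/consume sketch (assumptions: pika installed,
# broker on localhost; queue name and payload are arbitrary examples).
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="order_events", durable=True)

# Publisher side: fire-and-forget, so the caller never blocks on consumers.
channel.basic_publish(
    exchange="",
    routing_key="order_events",
    body=b'{"event": "order_created", "order_id": 42}',
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

# Consumer side: react to events as they arrive.
def handle(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="order_events", on_message_callback=handle)
# channel.start_consuming()  # blocks; commented out so the sketch terminates
connection.close()
```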


GPT and the Future of High-Performance Computing and Big Data Analytics

The emergence of Generative Pre-trained Transformer (GPT) models has revolutionized the field of high-performance computing and big data analytics. GPT models are capable of learning from large datasets and producing highly accurate results with minimal effort. This has enabled organizations to quickly analyze large datasets and extract meaningful insights. GPT models have been successfully used in a variety of applications, such as natural language processing, image recognition, and machine translation. With the increasing availability of large datasets, GPT models are expected to become even more powerful and efficient. This will enable organizations to gain deeper insights into their data and make better decisions. In addition, GPT models can be used to speed up the development of high-performance computing systems. GPT models can be used to optimize the hardware and software components of these systems, allowing them to run faster and more efficiently. 
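
As a small, hedged illustration of the kind of model the article describes, the Hugging Face transformers library can load a pre-trained GPT-style model in a few lines. GPT-2 is used here only because it is small and freely downloadable.

```python
# Minimal text-generation sketch with a pre-trained GPT-style model.
# Assumes the transformers package is installed; downloads GPT-2 on first run.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("High-performance computing lets organizations",
                   max_new_tokens=25, num_return_sequences=1)
print(result[0]["generated_text"])
```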


Essential Soft Skills for Testers: Unlocking Success in Your Testing Career

Collaboration skills are essential for testers, as they often work closely with various team members, including developers, product managers, and other stakeholders, to ensure the delivery of high-quality products. In this section, we will explore three crucial collaboration skills that enable testers to be effective team players: active participation, cross-functional cooperation, and providing and receiving constructive feedback. Active participation refers to engaging fully in team activities, sharing ideas, and contributing meaningfully to discussions and decisions. ... Cross-functional cooperation is the ability to work effectively with team members from different departments or areas of expertise. Testers who excel in cross-functional cooperation can effectively communicate with developers, designers, product managers, and others to identify and resolve issues, share knowledge, and promote a shared understanding of project goals. This skill is particularly important for testers in agile environments, where cross-functional teams are the norm and effective cooperation is critical for delivering high-quality products on time.


What Engineers Need to Know About Using Agile for Digital Transformation

As Agile software development techniques became more widely applied, so the pace of technological change continued to quicken. From the emergence of the cloud to the increase in mobility, and onto the rise of data analytics and artificial intelligence, businesses in every sector began using IT systems to power internal processes and external services. Digital transformation has emerged as shorthand for businesses seeking to reinvent themselves on a foundation of digital data and technology. Whether it’s digitizing paper records, creating new electronic channels to market or analyzing data to produce new insights, companies can use technology to improve an existing business process. Agile development has played a crucial role in many of these digitalization programs, especially the creation of IT applications. The successful rollout of these software-focused projects has encouraged engineers to start thinking about how Agile techniques can be used in other areas of IT, including digital transformation initiatives.


How generative AI can hurt cloud operations

Generative AI algorithms can be incompatible with existing cloud computing systems, leading to integration issues. This can delay the deployment of generative AI algorithms and cause problems with system performance or efficiency. ... Generative AI algorithms can exhibit unpredictable behavior, which leads to unexpected outcomes. This can result in system errors, degraded system performance, and other issues that are impossible to predict. I suspect we’ll get better at predicting behavior as we learn more about generative AI system operations, but the learning curve will be painful. I’ve already had some generative AI systems pulled off cloud systems due to unpredictable behavior and, what’s worse, unpredictable cloud computing bills. Generative AI is an unstoppable force in the enterprise technology space. It’s yet another technology made more accessible and affordable by cloud computing, and the easy availability of this technology will reverberate through the marketplace. Generative AI will become a technology that allows businesses to succeed by out-innovating their competition.


Data, AI and automation will never replace humans. Fact

While these technologies are nothing new, they do continue to advance at pace. This presents the opportunity to leverage them to help solve some of the biggest challenges we face in society as well as in business. But, we will only succeed when we remain as the masters of the technology, not the servants. Using AI and automation to empower people, not replace them, allows organisations to be data-driven yet technology-enabled and people-centric. Where these pieces of software are used as tools to help humans do their best work and remove the drudgery of manual tasks. And it makes complete sense. Because there will always be a moment of truth when a human must be involved at a crucial point. An automated process might take someone 75 per cent of the way, but a person needs to complete the rest. And if they can put all their effort into that 25 per cent, the result will be a better outcome for the employee, the customer and the organisation that brings them together. Ultimately, those who try to remove people from the equation are destined to fail. 


How artificial intelligence can inform decision-making

To implement AI for decision-making, organizations need a modern data infrastructure to support new data types and often massive amounts of data. Many organizations are moving to the cloud for data management and making use of data engineers and newer pipeline tools to help integrate data and make sure it is trustworthy. They are also hiring DevOps teams to deploy models and monitor them in production. According to a TDWI Best Practices Report, 67 percent of organizations deploying AI technologies today state that AI projects are built by data scientists and are deployed into production by DevOps teams. Some organizations are also using augmented intelligence applications, where AI is infused into the software to automate functionality, such as data cleansing, deriving insights, or building predictive models. In addition to hiring specialists, organizations must also encourage all employees to build excitement and trust. It is essential to involve stakeholders in the design and implementation of AI systems to ensure that they understand how the systems work and are comfortable using them.


CDOs Want Increased Investments in Data Management, Cloud

“The challenge comes from managing the increased volume and variety of data, the need to integrate it into business intelligence, and most notably, the need to keep data infrastructure updated to ensure various data types are supported and organizations are following data compliance considerations,” he says. He explains CDOs are seeking greater investments in data management because they need to identify how they can create a measurable impact for the customer experience. “This can only be done by collecting and analyzing data, which can often be tedious and require large sums of time and resources,” Adya says. “Hence, many are doubling down on data management resources to get the job done quicker and easier.” As the digital ecosystem evolves, enterprises are being forced to innovate and rely on cloud to accelerate digital transformation. “Effectively leveraging data through the cloud gives organizations a competitive edge and increases resiliency by being able to respond to disruptions and spot new market opportunities through intelligent data,” he explains.



Quote for the day:

"To have long term success as a coach or in any position of leadership, you have to be obsessed in some way." -- Pat Riley

Daily Tech Digest - April 06, 2023

AI might not steal your job, but it could change it

People whose jobs deal with language could, unsurprisingly, be particularly affected by large language models like ChatGPT and GPT-4. Let’s take one example: lawyers. I’ve spent time over the past two weeks looking at the legal industry and how it’s likely to be affected by new AI models, and what I found is as much cause for optimism as for concern. The antiquated, slow-moving legal industry has been a candidate for technological disruption for some time. In an industry with a labor shortage and a need to deal with reams of complex documents, a technology that can quickly understand and summarize texts could be immensely useful. So how should we think about the impact these AI models might have on the legal industry? ... AI in law isn’t a new trend, though. It has already been used to review contracts and predict legal outcomes, and researchers have recently explored how AI might help get laws passed. Recently, consumer rights company DoNotPay considered arguing a case in court using an argument written by AI, known as the “robot lawyer,” delivered through an earpiece.


Microservice Architecture Key Concepts

Generally, you can use a message broker for asynchronous communication between services, though it’s important to choose one that doesn’t add complexity to your system, or latency if it fails to scale as message volume grows. Version your APIs: Keep an ongoing record of the attributes and changes you make to each of your services. “Whether you’re using REST API, gRPC, messaging…” wrote Sylvia Fronczak for OpsLevel, “the schema will change, and you need to be ready for that change.” A typical pattern is to embed the API (application programming interface) version into your data/schema and gracefully deprecate older data models. For example, for your service product information, the requestor can ask for a specific version, and that version will be indicated in the data returned as well. Less chit-chat, more performance: Synchronous communications create a lot of back and forth between services. If synchronous communication is really needed, this will work okay for a handful of microservices, but when dozens or even hundreds of microservices are in play, synchronous communication can bring scaling to a grinding halt.
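
To illustrate embedding the API version into the data returned, here is a small, framework-agnostic sketch. The schema shapes and field names are invented for the example.

```python
# Embed the API version in the payload itself so consumers can request, and
# recognize, the schema they understand. Field names are hypothetical.
PRODUCT_SCHEMAS = {
    "v1": lambda p: {"version": "v1", "name": p["name"]},
    "v2": lambda p: {"version": "v2", "name": p["name"],
                     "price_cents": p["price_cents"]},
}

def product_info(product: dict, requested_version: str = "v2") -> dict:
    try:
        return PRODUCT_SCHEMAS[requested_version](product)
    except KeyError:
        raise ValueError(f"unsupported API version: {requested_version}")

print(product_info({"name": "widget", "price_cents": 499}, "v1"))
print(product_info({"name": "widget", "price_cents": 499}, "v2"))
```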


'Silent Success': How to master the art of quiet hiring for your business

“Ironically, quiet hiring is neither ‘quiet’ nor is any ‘hiring’ involved in it in the traditional sense,” says Bensely Zachariah, Global Head of Human Resources at Fulcrum Digital, a business platform and digital engineering services company. Quiet hiring entails companies upskilling their existing employees and moving them to new roles or new sets of responsibilities, on a temporary or in some cases, permanent basis to meet the ever-evolving demands of the business environment. Zachariah says: “Quiet hiring is essentially the opposite of ‘quiet quitting’, a buzzword during 2022, which, in simple words, means doing the bare minimum for what it takes to keep your job. The concept behind quiet hiring is rewarding high-performing individuals with more challenging roles, pay rises, bonuses, or promotions. This is not a new concept per se, in fact it is an age-old practice which was referred to as ‘facilitated talent mobility’ or ‘career advancement’ where organisations have spent considerable time and resources to facilitate upskilling/cross-skilling employees to give them new roles/avenues for work.”


AI and privacy concerns go hand in hand

Whether personal information is publicly available or not, its collection and use is still subject to the Privacy Act. While it’s on businesses to operate within the law, it pays for the public to upskill themselves and be savvy about what information they’re posting, and where. We know that criminals are becoming an even greater threat online because cybersecurity breaches are increasing and result in costly hacks of personal information. AI can be used to supercharge these criminals, leading to more privacy breaches, and making it even harder for cybersecurity systems to protect your information or for post-breach measures such as injunctions to protect stolen data that criminals may make available online. Powerful AI can aggregate data to a much greater degree, much more swiftly than humans can, meaning AI can potentially identify people that would otherwise not be identifiable through more time-intensive methods. Even seemingly benign online interactions could reveal more about you than you ever intended.


The Benefits of a Streaming Database

Experienced engineers understand that no software stack or tooling is perfect and comes with a series of trade-offs for each specific use case. With that in mind, let’s examine the particular trade-offs inherent to streaming databases to understand better the use cases they align best with. Incrementally updated materialized views – Streaming databases build on different dataflow paradigms that shift limitations elsewhere and efficiently handle incremental view maintenance on a broader SQL vocabulary. Other databases like Oracle, SQLServer and Redshift have varying levels of support for incrementally updating a materialized view. They could expand support, but will hit walls on fundamental issues of consistency and throughput. True streaming inputs – Because they are built on stream processors, streaming databases are optimized to individually process continuous streams of input data (e.g., messages from Kafka). Scaling streaming inputs involves batching them into larger transactions, slowing down data and losing granularity. In traditional databases (especially OLAP data warehouses), larger, less frequent batch updates are more performant.
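
As a hedged sketch of the incrementally updated materialized views discussed above: Materialize, for example, speaks the PostgreSQL wire protocol, so a standard Python driver works. The connection details, source table, and view name below are hypothetical.

```python
# Sketch of an incrementally updated materialized view in a streaming database
# (Materialize-style; connection details and table names are hypothetical).
import psycopg2

conn = psycopg2.connect("host=localhost port=6875 user=materialize dbname=materialize")
conn.autocommit = True
with conn.cursor() as cur:
    # The view is maintained incrementally as new order events stream in,
    # rather than being recomputed from scratch on a schedule.
    cur.execute("""
        CREATE MATERIALIZED VIEW IF NOT EXISTS revenue_by_region AS
        SELECT region, SUM(amount) AS revenue
        FROM orders
        GROUP BY region
    """)
    cur.execute("SELECT * FROM revenue_by_region")
    print(cur.fetchall())
conn.close()
```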


6 steps to measure the business value of IT

A challenge for determining the value contribution is the selection of suitable key figures. According to the study, IT departments today primarily use technical and IT-related metrics. That is legitimate, but in this way, there’s no direct connection to the business. Plus, there’s often a lack of affinity for meaningful KPIs, both in IT and in the specialist departments, says Jürgen Stoffel, CIO at global reinsurer Hannover Re. Therefore, in practice, only a few metrics suitable for both sides would be found, and the result is the IT value proposition is often unseen. “A consistent portfolio of metrics coordinated with the business would be helpful,” says Thomas Kleine, CIO of Pfizer Germany, and Held from the University of Regensburg adds: “Companies have to get away from purely technical key figures and develop both quantitative and qualitative metrics with a business connection.” In order to make progress along this path, the consultants developed a process model with several development and evaluation phases, using current scientific findings and speaking to CIOs.


Strategic risk analysis is key to ensure customer trust in product, customer-facing app security

Assessing risk requires identifying baseline security criteria around key elements such as customer contracts and regulatory requirements, Neil Lappage, partner at LeadingEdgeCyber and ISACA member, tells CSO. “From the start, you've got things you’re committed to such as requirements in customer contracts and regulatory requirements and you have to work within those parameters. And you need to understand who your interested parties are, the stakes they've got in the game, and the security objectives.” The process of defining the risk profile of an organization also requires strong collaboration among IT, cybersecurity, and risk professionals. “How the organization knows the risk profile of the organization involves the cybersecurity team working with the IT and reporting to the business so these three things — cyber, IT and risk — work in unison,” he says. “If cyber sits isolated from the rest of the business, if it doesn't understand the business, the risk is not optimized.”


FBI Seizes Genesis Cybercriminal Marketplace in 'Operation Cookie Monster'

The seizure of Genesis was a collaborative effort between international law enforcement agencies and the private sector, according to the notice, which included the logos of European law enforcement agency Europol; Guardia Civil in Spain; Polisen, the police force in Sweden; and the Canadian government. The FBI is also seeking to speak with those who've been active on the Genesis Market or who are in touch with administrators of the forum, offering an email address for people to contact the agency. ... Indeed, Genesis demonstrated the "growing professionalization and specialization of the cybercrime sphere," with the site earning money by gaining and maintaining access to victim systems until administrators could sell that access to other criminals, according to Sophos. The various tasks that the Genesis Market bots could undertake included large-scale infection of consumer devices to steal digital fingerprints, cookies, saved logins, and autofill-form data stored on them. The marketplace would package up that data and list it for sale, with prices ranging from less than $1 to $370, depending on the amount of embedded data that the packages contained.


Beyond Hype: How Quantum Computing Will Change Enterprise IT

“If you have a problem that can be put into an algorithm that leverages the parallelism of quantum computers, that’s where you can get a very dramatic potential speed up,” Lucero says. “If you have a problem where every additional variable you add doubles the computational complexity -- that is probably a good candidate to be adapted into a quantum computational problem.” The so-called “traveling salesperson problem,” for example, would be a fitting problem for a quantum computer. The algorithm asks the following: “Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city.” This and other combinatorial optimization problems are important to theoretical computer science because of the complexity of variations involved. Used as a benchmark, the algorithm can be applied to planning, logistics, microchip manufacturing and even DNA sequencing. In theory, a quantum computer could make quick work of this complicated algorithm and provide greater efficiency for programming.
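
To see why each added variable matters, the classical brute-force approach to the traveling salesperson problem can be written in a few lines. The city coordinates are arbitrary; the point is the factorial growth in orderings examined as cities are added, which is exactly the combinatorial explosion quantum approaches target.

```python
# Classical brute-force traveling salesperson search, to make the scaling
# point concrete: each added city multiplies the number of candidate tours.
from itertools import permutations
from math import dist, factorial

cities = [(0, 0), (2, 1), (5, 3), (1, 4), (4, 0)]  # arbitrary coordinates

def tour_length(order):
    legs = zip(order, order[1:] + order[:1])  # close the loop at the origin city
    return sum(dist(a, b) for a, b in legs)

best = min(permutations(cities), key=tour_length)
print(f"best tour length: {tour_length(best):.2f}")
print(f"orderings examined for {len(cities)} cities: {factorial(len(cities))}")
```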


How to build next-gen applications with serverless infrastructure

When explaining the benefits of serverless infrastructure and containers, I'm often asked why you need containers at all. Don't instances already provide isolation from underlying hardware? Yes, but containers provide other important benefits. Containers allow users to fully utilize virtual machine (VM) resources by hosting multiple applications (on distinct ports) on the same instance. As a result, engineering teams get portable runtime application environments with platform-independent capabilities. This allows engineers to build an application once and then deploy it anywhere, regardless of the underlying operating system. ... Implementing event-driven architecture (EDA) can work for serverless infrastructure through either a publisher/subscriber (pub/sub) model or an event-stream model. With the pub/sub model, notifications go out to all subscribers when events are published. Each subscriber can respond according to whatever data processing requirements are in place. On the event-stream model side, engineers set up consumers to read from a continuous flow of events from a producer. 
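
Here is a minimal in-memory publisher/subscriber sketch of the pub/sub model described above. In a real serverless deployment, a managed broker (SNS/SQS or EventBridge, for example) plays this role, so treat the code purely as an illustration of the pattern.

```python
# Minimal in-memory pub/sub sketch: publishing notifies every subscriber,
# and each subscriber responds per its own data-processing requirements.
from collections import defaultdict
from typing import Callable

subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    for handler in subscribers[topic]:  # notify all subscribers of the topic
        handler(event)

subscribe("order.created", lambda e: print("billing saw:", e))
subscribe("order.created", lambda e: print("shipping saw:", e))
publish("order.created", {"order_id": 42})
```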



Quote for the day:

"When I finally got a management position, I found out how hard it is to lead and manage people." -- Guy Kawasaki