Daily Tech Digest - December 19, 2022

7 ways CIOs can build a high-performance team

“People want to grow and change, and good business leaders are willing to give them the opportunity to do so,” adds Cohn. Here, you can get HR involved, encouraging them to bring their expertise and ideas to the table to help you come up with the right approach to training and employee development. In addition, it’s important to remember that an empathetic leader understands that people come from different places and therefore won’t grow and develop in the same manner. Modern CIOs must approach upskilling and training with this reality in mind, advises Benjamin Marais, CIO at financial services company Liberty Group SA. You also need to create opportunities that expose your employees to what’s happening outside the business, suggests van den Berg. This is especially true where it pertains to future technologies and skills because if teams know what’s out there, they better understand what they need to do to keep up. Given the rise in competition for skills in the market, you have to demonstrate your best when trying to attract top talent and retain them, stresses Cohn. 


10 Trends in DevOps, Automated Testing and More for 2023

Developers and QA professionals are some of the most sought-after skilled laborers who are acutely aware of the value they provide to organizations. As we head into next year, this group will continue to leverage the demand for their skills in pursuit of their ideal work environment. Companies that do not consider their developer experience and force pre-pandemic systems onto a hybrid-first world set themselves up for failure, especially when tools for remote and virtual testing and quality assurance are readily available. Developer teams also need to be equally equipped for success through the tools and opportunities that can help ensure an innate sense of value to the organization – and if they don’t have the tools they need, these developers will find them elsewhere. ... We’re starting to see consolidation in both the market and in the user personas we’re all chasing. Testing companies are offering monitoring, and monitoring companies are offering testing. This is a natural outcome of the industry’s desire to move toward true observability: deep understanding of real-world user behavior, synthetic user testing, passively watching for signals and doing real-time root cause analysis—all in service of perfecting the customer experience.


The beautiful intersection of simulation and AI

Simulation models can synthesize real-world data that is difficult or expensive to collect into good, clean and cataloged data. While most AI models run using fixed parameter values, they are constantly exposed to new data that may not be captured in the training set. If this goes unnoticed, these models will generate inaccurate insights or fail outright, causing engineers to spend hours trying to determine why the model is not working. ... Businesses have always struggled with time-to-market. Organizations that push a buggy or defective solution to customers risk irreparable harm to their brand, particularly startups. The opposite is also true: “also-rans” in an established market have difficulty gaining traction. Simulations were an important design innovation when they were first introduced, but their steady improvement and ability to create realistic scenarios can slow perfectionist engineers. Too often, organizations try to build “perfect” simulation models that take a significant amount of time to build, which introduces the risk that the market will have moved on.
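One common guard against this failure mode is a simple drift check on incoming data before trusting a model's output. The sketch below is illustrative only; the statistic and the threshold are assumptions, not something from the article:

```python
import statistics

def drift_score(train_values, live_values):
    """Compare the mean of live inputs against the training
    distribution, in units of training standard deviations."""
    mu = statistics.mean(train_values)
    sigma = statistics.stdev(train_values)
    return abs(statistics.mean(live_values) - mu) / sigma

# Training data centered near 0; live data has drifted to around 5.
train = [0.1, -0.2, 0.05, 0.3, -0.1, 0.15, -0.05, 0.2]
live = [5.1, 4.9, 5.2, 5.0]

if drift_score(train, live) > 3.0:  # flag inputs far outside the training set
    print("input drift detected: retrain or resimulate")
```

A check like this catches the "new data not captured in the training set" case before the model silently produces bad insights.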


What is VPN split tunneling and should I be using it?

The ability to choose which apps and services use your VPN of choice and which don't is incredibly powerful. Activities like remote work, browsing your bank's website, or online shopping via public Wi-Fi can definitely benefit from the added security of a VPN, but other pursuits, like playing online games or streaming readily available content, can be hurt by the slight delay VPNs may add to your traffic. The modest decrease to your connection speed is barely noticeable for browsing, but can be disastrous for online games. Being able to simultaneously connect to sensitive sites and services through your secure VPN and directly to non-sensitive games and apps means you won't constantly need to enable and disable your VPN connection when switching tasks. This is important, as forgetting to enable it at the wrong time could leave you exposed to security risks. ... Split tunneling divides your network traffic in two. Your standard, unencrypted traffic continues to flow unimpeded down one path, while your sensitive and secured data gets encrypted and routed through the VPN's private network. It's like having a second network connection that's completely separate, a tiny bit slower, but also far more secure.
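The routing decision at the heart of split tunneling can be sketched as a simple per-app policy. The app names and categories below are hypothetical, not taken from any real VPN client:

```python
# Toy split-tunnel policy: sensitive apps go through the VPN,
# latency-sensitive apps use the direct connection.
VPN_APPS = {"banking-app", "corporate-mail", "file-sync"}
DIRECT_APPS = {"game-client", "video-stream"}

def route_for(app: str) -> str:
    if app in VPN_APPS:
        return "vpn"      # encrypted tunnel: slower, secure
    if app in DIRECT_APPS:
        return "direct"   # unencrypted path: full speed
    return "vpn"          # fail closed: unknown apps default to the tunnel

print(route_for("banking-app"))  # vpn
print(route_for("game-client"))  # direct
```

Note the fail-closed default: an app the policy doesn't recognize goes through the tunnel, which matches the article's point that forgetting protection is the costly mistake.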


Why don’t cloud providers integrate?

Although it’s not an apples-to-apples comparison, Google’s Anthos enables enterprises to run applications across clouds and other operating environments, including ones Google doesn’t control. As with Amazon DataZone, it’s very possible to manage third-party data sources. One senior IT executive from a large travel and hospitality company told me on condition of anonymity, “I’m sure [cloud vendors] can integrate with third-party services, but I suspect that’s not a choice they’re willing to make. For instance, they could publish some interfaces for third parties to integrate with their control plane as well as other means in the data plane.” Integration is possible, in other words, but vendors don’t always seem to want it. This desire to control sometimes leads vendors down roads that aren’t optimal for customers. As this IT executive said, “The ecosystem is being broken. Instead of interoperating with third-party services, [cloud vendors often] choose to create API-compatible competing services.” He continued, “There is a zero-sum game mindset here.” Namely, if a customer runs a third-party database and not the vendor’s preferred first-party database, the vendor has lost.


How RegTech helps financial services providers overcome regulation challenges

Two main types of RegTech capabilities are helping financial service institutions stay compliant: software that encompasses the whole system — for example a full client onboarding cycle — and software that manages a particular process, such as reporting or document management. Hugo Larguinho BrĂ¡s explains: “The technologies that handle the whole process from A to Z are typically heavier to deploy, but they will allow you to cover most of your needs. These are also more expensive and often more difficult to adapt in line with a company’s specificities.” “Meanwhile, those technologies that treat part of the process can be combined with other tools. While this brings more agility, the need to find and combine several tools can also turn your target model more complex to run.” “We see more and more cloud and on-premises solutions available to asset management and securities companies, from software-as-a-service (SaaS) and platform-as-a-service (PaaS) deployed in-house, to solutions combined to outsourced capabilities ...”


What You Need to Know About Hyperscalers

Current hyperscaler adopters are primarily large enterprises. “The speed, efficiencies, and global reach hyperscalers can provide will surpass what most enterprise organizations can build within their own data centers,” Drobisewski says. He predicts that the partnerships being built today between hyperscalers and large enterprises are strategic and will continue to grow in value. “As hyperscalers maintain their focus on lifecycle, performance, and resiliency, businesses can consume hyperscaler services to thrive and accelerate the creation of new digital experiences for their customers,” Drobisewski says. ... Many adopters begin their hyperscaler migration by selecting the software applications that are best suited to run within a cloud environment, Hoecker says. Over time, these organizations will continue to migrate workloads to the cloud as their business goals evolve, he adds. Many hyperscaler adopters, as they become increasingly comfortable with the approach, are beginning to establish multi-cloud estates. “The decision criteria is typically based on performance, cost, security, access to skills, and regulatory and compliance factors,” Hoecker notes.


UID smuggling: A new technique for tracking users online

Researchers at UC San Diego have for the first time sought to quantify the frequency of UID smuggling in the wild, by developing a measurement tool called CrumbCruncher. CrumbCruncher navigates the Web like an ordinary user, but along the way, it keeps track of how many times it has been tracked using UID smuggling. The researchers found that UID smuggling was present in about 8 percent of the navigations that CrumbCruncher made. The team is also releasing both their complete dataset and their measurement pipeline for use by browser developers. The team’s main goal is to raise awareness of the issue with browser developers, said first author Audrey Randall, a computer science Ph.D. student at UC San Diego. “UID smuggling is more widely used than we anticipated,” she said. “But we don’t know how much of it is a threat to user privacy.” ... UID smuggling can have legitimate uses, the researchers say. For example, embedding user IDs in URLs can allow a website to realize a user is already logged in, which means they can skip the login page and navigate directly to content.
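The flip side of this technique is that browsers and extensions can strip suspected UID-bearing parameters from navigations. A minimal sketch, assuming an illustrative blocklist of parameter names (not the list CrumbCruncher uses):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Parameter names commonly used to carry user IDs across navigations.
# This set is illustrative, not from the UC San Diego study.
TRACKING_PARAMS = {"uid", "user_id", "fbclid", "gclid"}

def strip_uids(url: str) -> str:
    """Remove suspected UID-bearing query parameters from a URL,
    roughly what privacy-protecting browsers do on navigation."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if k.lower() not in TRACKING_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(strip_uids("https://example.com/page?article=42&uid=abc123"))
# https://example.com/page?article=42
```

The hard part, as the researchers note, is that the same mechanism has legitimate uses (like login continuity), so a naive blocklist can break sites.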


Bring Sanity to Managing Database Proliferation

How can you avoid being a victim of the bow wave of database proliferation? Recognize that you can allocate your resources in a way that benefits both your bottom line and your stress level by consolidating how you run and manage modern databases. Investing heavily in self-managing the legacy databases used in high volume by many of your people makes a lot of sense. Database workloads that are typically used for mission-critical transaction processing, such as IBM DB2 in financial services, are subject to performance tuning, regular patching and upgrading by specialized database administrators in a kind of siloed sanctum sanctorum. Many organizations will hire an in-house Oracle or SAP Hana expert and create a team, ... But what about the 40 other highly functional, highly desirable cloud databases in your enterprise that aren’t used as often? Do you need another 20 people to manage them? Open source databases like MySQL, MongoDB, Cassandra, PostgreSQL and many others have gained wide adoption, and many of their use cases are considered mission-critical. 


An Ode to Unit Tests: In Defense of the Testing Pyramid

What does the unit in unit tests mean? It means a unit of behavior. There's nothing in that definition dictating that a test has to focus on a single file, object, or function. Why is it difficult to write unit tests focused on behavior? A common problem with many types of testing comes from a tight connection between software structure and tests. That happens when the developer loses sight of the test goal and approaches it in a clear-box (sometimes referred to as white-box) way. Clear-box testing means testing with the internal design in mind to guarantee the system works correctly. This is really common in unit tests. The problem with clear-box testing is that tests tend to become too granular, and you end up with a huge number of tests that are hard to maintain due to their tight coupling to the underlying structure. Part of the unhappiness around unit tests stems from this fact. Integration tests, being more removed from the underlying design, tend to be impacted less by refactoring than unit tests. I like to look at things differently. Is this a benefit of integration tests or a problem caused by the clear-box testing approach? What if we had approached unit tests in an opaque-box approach?
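The opaque-box idea can be made concrete with a small example: the test below exercises only the public contract of a hypothetical ShoppingCart, so refactoring its internals would not break it:

```python
import unittest

# ShoppingCart is a made-up example, not from any real codebase.
class ShoppingCart:
    def __init__(self):
        self._items = {}  # internal detail the test never touches

    def add(self, name, price, qty=1):
        _, old_qty = self._items.get(name, (price, 0))
        self._items[name] = (price, old_qty + qty)

    def total(self):
        return sum(price * qty for price, qty in self._items.values())

class CartBehaviorTest(unittest.TestCase):
    def test_total_reflects_added_items(self):
        cart = ShoppingCart()
        cart.add("book", 10.0)
        cart.add("pen", 2.5, qty=2)
        # Asserts observable behavior only; swapping the internal dict
        # for a list would leave this test green.
        self.assertEqual(cart.total(), 15.0)
```

A clear-box version of the same test would peek at `_items` directly, and would break under exactly the refactorings the passage describes.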



Quote for the day:

"Strategy is not really a solo sport even if you're the CEO." -- Max McKeown

Daily Tech Digest - December 18, 2022

Shift Left Testing in Microservices Environments

The waterfall model of development involved the explicit passing of responsibilities between highly specialized design, development, QA, and release teams. It also involved lengthy feedback loops. Scrum and agile methodologies made the entire SDLC more flexible and nimble by introducing sprints and allowing more frequent iterative development and delivery. Further, DevOps and DevSecOps focus on removing the silos between development, operations, and security through tooling and automation. As a result, the time to market and quality have improved dramatically. Adding shift left testing into the mix better positions teams to handle the broad range of responsibilities from the design stage through the maintenance stage as effectively as possible. Shift left testing focuses on prevention rather than detection. Shift left benefits include the following: increased efficiency by eliminating bugs earlier in the SDLC; reduced human error and associated costs; increased delivery speed and less time between releases; improved software quality; and a competitive advantage.


Cyber Security Blue Team: Roles, Exercise, Tools & Skills

The blue teams are responsible for establishing security measures around an organization's key assets. After obtaining data and documenting what needs to be protected, the blue team conducts a risk assessment: it identifies threats and the weaknesses those threats can exploit, determines which assets are critical and what impact their absence would have on the business, and documents the importance of these assets. Following that, employees are educated on security procedures, and stricter password policies are implemented to tighten access to the system. A monitoring tool is often installed to log and check access to systems. As part of regular maintenance, blue teams will perform DNS audits, scan internal and external networks for vulnerabilities, and capture network traffic samples. Senior management has a crucial role in this stage since only they can accept a risk or implement mitigating controls. As a result, security controls are often selected based on their cost-benefit ratio.
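The cost-benefit reasoning above can be sketched as a toy risk register, where risk is likelihood times impact and a control is only worth deploying when the exposure it removes exceeds its cost. All figures are illustrative, not a real methodology:

```python
# (asset, annual likelihood 0-1, impact in $, control cost in $)
assets = [
    ("customer-db", 0.30, 500_000, 40_000),
    ("internal-wiki", 0.50, 10_000, 20_000),
]

for name, likelihood, impact, control_cost in assets:
    exposure = likelihood * impact
    # Mitigate when the expected loss exceeds the cost of the control;
    # otherwise senior management may choose to accept the risk.
    decision = "mitigate" if exposure > control_cost else "accept risk"
    print(f"{name}: exposure ${exposure:,.0f} -> {decision}")
```

This is the decision the passage assigns to senior management: spend on the control, or formally accept the residual risk.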


An AI-Stretch Of The Imagination

Think about yourself as a customer for a moment, about how many businesses have your personal information housed in their data warehouses. Even if they have your permission to store your details and notify you of relevant promotional offers, this does not guarantee your information will not be leaked at some point. Data leaks are not going away any time soon, so businesses focused on enhancing personal and relevant customer experiences—while remaining committed to protecting your privacy—are fast waking up to the value of synthesizing their structured data. By structured data, I mean the hundreds/thousands/millions of rows of data that live in places like databases or CSV files. We’re talking about billions of data points, and this number continues to grow. Here, AI trains on the original data and generates a synthetic version of that data which is privacy safe, with zero links back to any original data points. Not only is it statistically representative, but the data can be modified during the synthesization process; for example, an existing bias can be corrected to produce a more balanced data set.
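For a single numeric column, the idea can be sketched in a few lines: fit summary statistics on the real data, then sample fresh values with no link back to any original row. Real synthesization tools model joint distributions across columns and can correct bias during generation; this is only a minimal illustration:

```python
import random
import statistics

# A "real" column of customer ages (illustrative data).
real_ages = [23, 35, 41, 29, 52, 38, 44, 31]

# Fit the distribution, then sample synthetic rows from it.
mu, sigma = statistics.mean(real_ages), statistics.stdev(real_ages)
random.seed(0)
synthetic_ages = [round(random.gauss(mu, sigma)) for _ in range(1000)]

# Statistically representative, but no synthetic value is traceable
# back to any individual in the original data.
print(round(statistics.mean(synthetic_ages)))
```

Bias correction, mentioned in the passage, would amount to sampling from an adjusted distribution rather than the fitted one.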


DNS Is Conduit Into Air-Gapped Networks, Say Researchers

An air-gapped network's DNS server connected to the enterprise IT system has connections to the public DNS system on the internet even if it's kept behind a firewall. That's because of the nature of the DNS system, Uriel Gabay, a Pentera security researcher, tells Information Security Media Group. The DNS is the decentralized system that translates domain names into the numerical IP addresses needed for routing across a network. A large majority of organizations surveyed by IDC earlier this year said they experienced some type of DNS attack in 2022. Most DNS traffic is sent over the UDP protocol, meaning there isn't built-in error detection for packets sent and received as there is in TCP. It's the "received" part of a DNS response that poses a risk. Given the possibility for a DNS request to trace the hops from an air-gapped network to the enterprise network to a public DNS server, a datagram originating from outside the air gap is ultimately received by a computer on the inside. "You allow the response to come into your organization because this is the meaning of allowing the protocol."
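It helps to see how little a DNS query over UDP actually carries: a 12-byte header plus the encoded name, with no session and no built-in integrity check, so whatever datagram comes back with a matching transaction ID is accepted as the answer. A hand-built A-record query (a sketch, not a full resolver):

```python
import struct

def build_dns_query(name: str, txid: int = 0x1234) -> bytes:
    """Build a minimal DNS A-record query by hand (RFC 1035 wire format)."""
    header = struct.pack(">HHHHHH",
                         txid,    # transaction ID
                         0x0100,  # flags: standard query, recursion desired
                         1,       # one question
                         0, 0, 0) # no answer/authority/additional records
    # The name is a sequence of length-prefixed labels, ending in a zero byte.
    question = b"".join(
        bytes([len(label)]) + label.encode() for label in name.split(".")
    ) + b"\x00"
    question += struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

pkt = build_dns_query("example.com")
print(len(pkt))  # 12-byte header + 17-byte question = 29
```

The 16-bit transaction ID is the only thing tying a response to its query, which is why permitting the protocol at all means permitting inbound datagrams.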


10 Most-Liked Programming Languages that Humans Will Use in 2050

JavaScript is a powerful programming language that is a vital part of the World Wide Web; 98 percent of websites use it as a client-side programming language. Originally used only inside web browsers, JavaScript is currently used for server-side website deployments and non-browser applications. ... Java is a most-liked programming language that is widely utilized for creating client-server applications. The main benefit of Java is that it is a loosely coupled programming language that can simply run on any platform that supports Java. Because of this, Java is referred to as the programming language that enables its users to “write once, run anywhere.” ... Python is a simple-to-learn, object-oriented, and flexible language. It is the first choice of most developers who wish to work on machine learning and artificial intelligence. It is also utilized for frontend and backend development, web automation, computer vision, and code testing. With the growth in demand for data science and artificial intelligence, Python will remain popular in the upcoming years.


3 types of channels in Microsoft Teams

Private Channels can be accessed only by those members of the team who were included in the Private Channel. And this is very critical and important to understand: you cannot invite just anyone into a Private Channel. You can only invite users who are already members of the overall Team. In other words, using the example I mentioned above, I can only include John and Mary in the private channel, who are already members of the Team. I cannot invite David, who is not part of my Team in the first place. So think of Private Channels as almost a separate membership roster within the overall Team roster (membership). ... The Shared Channel is represented by a “shared” icon on the channel name and is visible only to the members of that shared channel. It would be invisible to regular team members who are not members of that channel. ... You probably already guessed that the file management model for the Shared Channel resembles that of a Private Channel. Just like with a Private Channel, a separate SharePoint site is created. It has the same naming convention: [name of the team]-[name of the shared channel].
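The naming convention can be sketched as a one-line function (this ignores any normalization, such as URL encoding or space handling, that SharePoint may apply to the actual site address):

```python
def channel_site_name(team: str, channel: str) -> str:
    """Illustrates the convention described above:
    [name of the team]-[name of the shared channel]."""
    return f"{team}-{channel}"

print(channel_site_name("Finance Team", "Budget Review"))
# Finance Team-Budget Review
```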


Accenture shares 9 cybersecurity predictions for 2023

“As the cyber threat landscape evolves, we will see the number of cyber events and organizations held to ransom continue to rise,” said James Nunn-Price, growth markets security lead at Accenture. “With this increase, organizations will continue to make significant investments in their situational awareness, threat-based security monitoring, incident response and crisis management practices.” However, many organizations, including those with mature practices, are still overly reliant on people, and that can slow detection and responses, he said. For example, Accenture found that even when security monitoring teams took action to mitigate attacks, it was still too late to stop data exfiltration. Attackers are using the latest tools and automated technologies to strike fast and hard — to exfiltrate key data and damage infrastructure within minutes. “In 2023, more organizations will prioritize fully automated response technology, as the impacts from a successful breach now far outweigh the risks of these newer technologies, which in turn, frees their people up to focus on how the business can become more cyber resilient,” said Nunn-Price.


Meta's Data2vec 2.0: Second time around is faster

The second time around, Meta's scientists made the program faster and, in a few cases, more accurate on benchmark tests of machine learning tasks. "Data2vec 2.0 shows that the training speed of self-supervised learning can be substantially improved with no loss in downstream task accuracy," write authors Alexei Baevski, Arun Babu, Wei-Ning Hsu, and Michael Auli, four of the authors of the original Data2vec paper, in this new work, Efficient Self-supervised Learning with Contextualized Target Representations for Vision, Speech and Language, posted on arXiv. The singular accomplishment of this second Data2vec is to reduce the time it takes to train Data2vec. Training a neural net is typically measured in terms of "epochs," meaning the number of times the neural net is given the training examples. It can also be measured by the wall clock time, the literal hours, minutes, and days counted from start to finish. "Experiments show that Data2vec 2.0 can reach the same accuracy as many popular existing algorithms in 2-16x the training speed," they write.
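The two measures relate through a simple product: wall-clock time is epochs times time per epoch, and a faster model can cut both factors. The numbers below are hypothetical, chosen only to illustrate a 16x case within the 2-16x range the authors report:

```python
def wall_clock_hours(epochs: int, hours_per_epoch: float) -> float:
    """Wall-clock training time = passes over the data x time per pass."""
    return epochs * hours_per_epoch

baseline = wall_clock_hours(epochs=800, hours_per_epoch=0.04)  # hypothetical
improved = wall_clock_hours(epochs=100, hours_per_epoch=0.02)  # hypothetical

print(f"speedup: {baseline / improved:.0f}x")  # speedup: 16x
```

Reporting both epochs and wall-clock time matters because an algorithm can need fewer passes yet spend longer on each one, or vice versa.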


How The Metaverse Could Impact Businesses In The Not-Too-Distant Future

For engineering, procurement and construction (EPC) companies like my company, Black & Veatch (BV), the metaverse opens a door of opportunity. For companies that place a top priority on developing and maintaining a strong safety culture, these new technologies provide virtual training experiences that can be designed to closely match real-world situations. Using a game-styled approach, workers can practice safety procedures in the metaverse and be better prepared to work on construction sites. The metaverse can be a new creative way for companies to address a variety of hiring and retention challenges in today’s changing work world. According to Indeed, 88% of employers say they now conduct video interviews with candidates. Most companies said this provides them with an opportunity to engage more leaders in the interview process and allows for more flexibility in scheduling. Another way the metaverse could impact talent management is by using virtual worlds to assess and test skills and performance. 


Dozens of cybersecurity efforts included in this year’s US NDAA

FedRAMP Authorization Act - The bill includes a provision to codify into law and update the Federal Risk and Authorization Management Program (FedRAMP). The FedRAMP program is operated by the General Services Administration (GSA) to provide a standardized, government-wide approach to security assessment, authorization, and continuous monitoring for cloud products and services used by federal government agencies. Protection of critical infrastructure - This provision enhances the military’s ability to step in and conduct actions in defense against attacks on critical infrastructure. It states that if “the President determines that there is an active, systematic, and ongoing campaign of attacks in cyberspace by a foreign power against the Government or the critical infrastructure of the United States,” the President may authorize the secretary of defense, acting through the commander of Cybercom, to conduct military cyber activities or operations pursuant to existing statutory war powers in foreign cyberspace to deter, safeguard, or defend against such attacks.



Quote for the day:

"Leadership is based on a spiritual quality; the power to inspire, the power to inspire others to follow." -- Vince Lombardi

Daily Tech Digest - December 17, 2022

Innovation vs. execution: You have to have both

Innovation requires a major marketing commitment. Humans don’t like change, but there’s no way to innovate without introducing change. You have to convince potential buyers that the benefits of the change are worthwhile. If you don’t, you could have the best product around and still falter in the market. Take the Microsoft Zune, for example. It was far more innovative than the iPod at the time. It was more robust, played video, allowed for legal music sharing, and it came in colors. But Microsoft didn’t market those differences, the design was less attractive, the Zune required a subscription, and getting video to work was … problematic. Microsoft fixed the execution problems: it made the Zune better looking, got video to work, and even made the subscription more compelling. But it cut back on marketing and lost even the fans it had. Innovation needs both execution and marketing to make a difference, and the most innovative products have the highest execution and marketing needs. Tesla is popular because it hit a niche other carmakers didn’t take seriously: the ecologically conscious buyer. And its unique vehicle (and strong customer advocacy) allowed it to take market leadership.


The Key Role of Citizen Developers in Creating Digital Transformation

Citizen developers have the potential to create meaningful DX without any of these burdens. They are only interested in the core definition of DX: making things work better, faster, and less expensively to help people do a better job and enjoy doing it much more. Since they always start from the processes already in use, citizen developers can target their efforts more accurately than their code-cutting counterparts. New hardware, software, or infrastructure is only occasionally considered part of the initiative. In many cases, the end product from an IDE may ideally suit the need and be used as-is. In the worst case, the resulting program is given to the professionals to expand upon, meaning they get a head start on development. It's based on deep knowledge of the user community, and it's already partially baked! Developers need to spend much less time in discovery and development. Forrester suggests that this partnership approach, first citizen developer then professional developer, has "the potential to make software development as much as 10-times faster than traditional methods."


Why Employee-Targeted Digital Risks Are The Next Frontier Of Enterprise Cybersecurity

Employee-Targeted Digital Risk represents the threat surface of attacks that come to the enterprise via the team’s personal devices, personal accounts and digital lives. These attacks take a variety of forms, but what they have in common is that they circumvent the extensive cybersecurity controls companies have in place by targeting accounts and devices outside the company’s purview and then using that access to move laterally to company systems and data. Sometimes these incidents start with a specific target company, and bad actors will identify a vulnerable employee. In other cases, these incidents start with vulnerable or exposed personal data, and target companies are chosen opportunistically. We in the industry have been speaking on this extensively for several years—for example, Martin Casado of Andreessen Horowitz dug into this problem in 2019 in The New Attack Surface is Your Life, and my company and Strategy of Security collaborated on a recent whitepaper—but only recently has the threat surface become more talked about. 


Microservices Deployment Patterns

In many cases, microservices need their own space and a clearly separated deployment environment. In such cases, they can’t share the deployment environment with other services or service instances: there may be a chance of resource conflict or scarcity, and there might be issues when services written in the same language or framework but with different versions can’t be co-located. Here, a service instance could be deployed on its own host. The host could be either a physical or virtual machine. There wouldn’t be any conflict with other services, and the service remains entirely isolated. All the resources of the VM are available for consumption by the service, and it can be easily monitored. ... In many cases, microservices need their own, self-contained deployment environment. The microservice must be robust and must start and stop quickly. It also needs quick upscaling and downscaling. It can’t share any resources with any other service and can’t afford to have conflicts with other services. It needs more resources, and the resources must be properly allocated to the service.


Are robots too insecure for lethal use by law enforcement?

The law enforcement agency argued that the robots would only be used in extreme circumstances, and only a few high-ranking officers could authorize their use as a deadly force. SFPD also stressed that the robots would not be autonomous and would be operated remotely by officers trained to do just that. The proposal came about after the SFPD struck language from a policy proposal related to the city’s use of its military-style weapons. The excised language, proposed by Board of Supervisors Rules Committee Chair Aaron Peskin, said, “Robots shall not be used as a use of force against any person.” The removal of this language cleared the path for the SFPD to retrofit any of the department’s 17 robots to engage in lethal force actions. Following public furor over the prospects of “murder” robots, the Board of Supervisors reversed itself a week later and voted 8-3 to prohibit police from using remote-controlled robots with lethal force. The supervisors separately sent the original lethal robot provision of the policy back to the Board’s Rules Committee for further review, which means it could be brought back again for future approval.


Why Memory Allocation Resilience Matters in IoT

After all, modern computers, tablets, and servers have so much memory that it often seems like an infinite resource. And, if there is any trouble, a memory allocation failure or error is so unlikely that the system normally defaults to program exit. This is very different, however, when it comes to the Internet of Things (IoT). In these embedded connected devices, memory is a limited resource and multiple programs fight over how much they can consume. The system is smaller and so is the memory. Therefore, it is best viewed as a limited resource and used conservatively. ... In modern connected embedded systems, malloc is more frequently used, and many embedded systems and platforms have decent malloc implementations. The reason for the shift is that modern connected embedded systems do more tasks, and it is often not feasible to statically allocate the maximum required resources for all possible executions of the program. This shift to using malloc actively in modern connected embedded systems requires more thorough and systematic software testing to uncover errors.
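The discipline this calls for, treating allocation failure as a normal event rather than a fatal one, can be illustrated with a toy fixed-size block pool, the kind of allocator constrained embedded systems often use in place of general-purpose malloc (Python here purely for illustration; real implementations are in C):

```python
class BlockPool:
    """Toy fixed-size block allocator with a bounded supply of blocks."""

    def __init__(self, num_blocks: int):
        self._free = list(range(num_blocks))

    def alloc(self):
        """Return a block id, or None when the pool is exhausted
        (the embedded analogue of malloc returning NULL)."""
        return self._free.pop() if self._free else None

    def free(self, block_id: int):
        self._free.append(block_id)

pool = BlockPool(num_blocks=2)
a, b = pool.alloc(), pool.alloc()
if pool.alloc() is None:         # allocation failure is a normal event
    pool.free(a)                 # recover by releasing something first
print(pool.alloc() is not None)  # True
```

The point is the branch: every caller checks for exhaustion and has a recovery path, instead of assuming the allocation succeeded or exiting the program.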


Artificial Intelligence could steal your restaurant job. Here's how

AI-powered voice bots such as Tori will join other tech used in quick-service restaurants. Tori is a front-of-house "employee," but other robotic restaurant workers cook, clean, and serve food. Robotics and AIs in the food industry are a direct result of a crippling labor shortage, as restaurants around the country have hundreds of thousands of fewer employees than they did two years ago, according to the US Labor Department. Other uses for AI in the restaurant industry include leveraging AI-powered vision to monitor drive-thru efficiency. Companies like Plainsight offer their services to help restaurants mitigate lost revenue due to customers leaving the drive-thru because of long wait times. ... AI can also help restaurants reduce waste, which helps decrease food costs and the burden of food waste on the environment. Companies such as Winnow deliver AI-powered software to help restaurants decrease their food waste. The technology specialist created a kitchen tool called Winnow Vision, which monitors what food is thrown in the trash and automatically collects that data. It uses that information to notify kitchen staff about how much of what food is being wasted throughout the day.


New AI Bot Could Take Phishing, Malware to a Whole New Level

Since the cybercrime market for ransomware as a service is already organized to outsource malware development, tools such as ChatGPT could make the process even easier for criminals entering the market. "I have no doubt that ChatGPT and other tools like this will democratize cybercrime," says Suleyman Ozarslan, security researcher and co-founder of Picus Security. "It's bad enough that ransomware code is already available for people to buy off the shelf on the dark web. Now virtually anyone can create it themselves." In testing ChatGPT, Ozarslan instructed the bot to write a phishing email, and it spat out a perfect email within seconds. "Misspellings and poor grammar are often tell-tale signs of phishing, especially when attackers are targeting people from another region. Conversational AI eliminates these mistakes, making it quicker to scale and harder to spot them," he says. While the terms of service for ChatGPT prohibit individuals from using the software for nefarious purposes, Ozarslan prompted the bot to write the phishing email by telling it the email would be used for a simulated attack.


California’s finance department confirms breach as LockBit claims data theft

California’s Department of Finance has confirmed it’s investigating a “cybersecurity incident” after the prolific LockBit ransomware group claimed to have stolen confidential data from the agency. The California Office of Emergency Services (Cal OES) in a statement on Monday described the threat as an “intrusion” that was “identified through coordination with state and federal security partners.” The statement did not provide any specifics about the nature of the incident, who was involved or whether any information had been stolen. The California Department of Finance did not respond to TechCrunch’s questions prior to publication. “While we cannot comment on specifics of the ongoing investigation, we can share that no state funds have been compromised, and the department of finance is continuing its work to prepare the governor’s budget that will be released next month,” the statement said. While state officials remain tight-lipped about the incident, the notorious LockBit ransomware gang on Monday claimed responsibility for the attack.


Why diversity and inclusion matter for technology

There are ways in which technology firms can help improve their diversity and inclusion. Jinny Mitchell-Kent, chief operating officer at digital agency Great State, believes more needs to be done to encourage applications from different groups in the first place. “Considering where we market roles, what language we use in our job descriptions and what our hiring process is like can facilitate receiving more diverse candidates,” she says. “For example, neurodivergent people may be more receptive to an online job advert that is not on a hugely colourful background with lots of moving components.” Training existing staff can also help to ensure individuals avoid unconscious bias and become advocates for change, believes Suki Sandhu OBE, CEO and founder of diversity consultancy INvolve and executive recruiter Audeliss. “Training and workshops are critical to contextualise issues surrounding race, gender and LGBTQ+ communities within a workplace, and provide employees with a deeper understanding of diversity and inclusion’s importance and their role in driving action,” he says.



Quote for the day:

"Leadership matters more in times of uncertainty." -- Wayde Goodall

Daily Tech Digest - December 15, 2022

How acceptable is your acceptable use policy?

Your AUP needs to be auditable and enforceable—but there’s a tricky balance between protecting employees and making them feel like they’re working for an authoritarian regime. “It should be written to the end user rather than the technical person who works in security,” says Michaels. “One of the pitfalls that we see in the development of policies is the security leader will either own the creation of the policy or delegate it to somebody on their team, and they won’t go out and source feedback and check that they’re on the right track.” More mature security programs source feedback and have closer partnerships with HR and the other functions in the business. But many companies are “still trying to do the basic blocking and tackling,” Michaels says. “They’re still more focused on the technology and the process rather than the people that they’re impacting.” The AUP should be clear, concise, and easy to understand—not technobabble or legalese. But getting employee buy-in could also come down to something as simple as word choice.


The power of incremental momentum

Companies can get the type of disruptive innovation they need to survive and create lasting change, without moving fast and breaking things. This can be achieved by building momentum for change incrementally. It’s not a new approach, but it can get overlooked when a sense of urgency arises and appears to dictate swift action. ... Incremental momentum has been successfully used in other endeavors. Almost 120 years after Roosevelt’s work, a team of environmental scientists in Finland surveyed the success of incremental change in achieving the country’s sustainability goals. They concluded: “The strengths of small wins include the ability to react to the constantly changing, dynamic conditions…and to deepen trust, commitment and understanding among people.” The report continued, small wins “can facilitate progress and interfere with old routines by bringing about small steps that may result in continuous transformational change and generate radical changes in the long run.” ... The risks were great. The SAP executive team had to balance putting effort into building cloud solutions with maintaining engineering support for the ERP system innovations that its customers relied on.


IT leaders face reality check on hybrid productivity

Organizations are realizing that hybrid work is more about how teams come together — not just what’s right for the organization or individual, says Jonathan Pearce, workforce strategies lead at Deloitte Consulting. So more companies are ratcheting up expectations for their team leaders to decide how work gets done, and then hold them accountable as a team when it comes to performance and rewards. “We’re expecting more team leaders to have open discussions with their teams on what’s working and not working around communication, the norms around [how quickly] they’re expected to respond and how we come together when we need to collaborate,” Pearce says. “The question now becomes how do we up their game as managers — not just managers of work but really orchestrators of a more complex team environment,” Pearce says. Good managers make work more enjoyable for their teams, are better able to identify and use each employee’s strengths and help those workers gain more skills and experience they need to develop their careers and be more productive, he adds.


Improving Cyberresilience in an Age of Continuous Attacks

Effective cybersecurity is about risk management. For example, when banks lend money or issue credit cards, the chief risk officer (CRO) has created a model based on profiles that assume there will be a default rate, meaning certain borrowers may not ever repay their obligations. This is communicated to the chief executive officer (CEO) so that the entire management team understands that it will incur losses from certain customers. Banks are then able to plan and reserve for these losses before they happen. Enterprises must think of cybersecurity in the same manner in which banks lend money. It is only a matter of time before a breach occurs. If the right controls are in place, these breaches are nothing more than a simple incident of one machine being compromised rather than an entire network’s worth of data. Each new attack has the potential to change the threat model. This may not be the first thing on cybersecurity team members’ minds after an attack, but changes could be required immediately.


The 3G shutdown: Here are the impacted devices. Do you own any?

So, what does this all mean for older hardware like cell phones, alarms, and GPS systems that thrive on the 3G spectrum? To put it bluntly, many of the network-driven features will become obsolete, presenting some unforeseen dangers. Fortunately, there are steps that you and your loved ones can take to safely transition from aging to future-proof tech. In some cases, manufacturers may even be able to give your older gadgets new life through software upgrades. ... Besides ushering in the revolution of smartphones, 3G has played a foundational role in the navigation and alarm-based systems that we rely on during our everyday commutes. With the institution of faster and more reliable 5G, roadside assistance and emergency crash alerts are among the many network-based features that will be affected by the shutting down of 3G. Many cars also have an emergency SOS button that, when pressed, dials first responders via 3G. That, too, will lose functionality. Vehicles from popular automakers like Toyota, Lexus, Nissan, Hyundai, Dodge, and more released before 2019 are susceptible to the issues mentioned above. 


Quantum Computing Will Change Our Lives. But Be Patient, Please

Over and over at Q2B, quantum computing advocates showed themselves to be measured in their predictions and guarded about promising imminent breakthroughs. Comments that quantum computing will be "bigger than fire" are the exception, not the rule. Instead, advocates prefer to point to a reasonable track record of steady progress. Quantum computer makers have gradually increased the scale of quantum computers, improved their software and decreased the qubit-perturbing noise that derails calculations. The race to build a quantum computer is balanced against patience and technology road maps that stretch years into the future. ... And new quantum computing efforts keep cropping up. Cloud computing powerhouse Amazon, which started its Braket service with access to others' quantum computers, is now at work on its own machines too. At Q2B, the Novo Nordisk Foundation -- with funding from its Novo Nordisk pharmaceutical company -- announced a plan to fund a quantum computer for biosciences at the University of Copenhagen's Niels Bohr Institute in Denmark.


The Future: Data Access Must Be Intelligently Automated

Of course, an AI engine must contain certain features, including the ability to provide transparent explanations to data managers regarding processes and the capability to receive data manager feedback for learning and improving the DPP. It must also boost efficiency and accuracy when automating and improving how policies are built, maintained and enforced. Then, over time, these policy applications become more accurate, flexible and intelligently automated. An AI engine also requires vast data sets for training. However, it’s possible to reduce the time required by applying the “human in the loop concept,” where data managers educate the AI. Through this process, the AI engine learns faster and makes better decisions and suggestions. Policies can then be maintained and updated, improving the DPP and supporting organizations to quickly and automatically decide on sharing processes that are safe, secure and compliant. This is the ideal convergence of human expertise and AI technology. And it’s the future of data access governance and lifecycle management. Is your business ready to take advantage?


How much digital trust can you place on zero-trust?

One very important principle of zero-trust that is often understated is assumed breach. All too often, some identity and access management (IAM) product suppliers are quick to share how they can help enterprises achieve zero-trust. This is all well and good, except for the fact that they often cover the first two principles of i) verify explicitly and ii) use least privilege access, but not enough of iii) assume breach. While the first two principles help to limit any attack blast radius and hinder a breach as it steps through the attack kill chain, the third and last principle is critical to effective and efficient detection and containment of a breach: the ability to detect fast, contain fast and recover fast. If we believe that breaches are inevitable, assume breach requires a bigger stage. ... With the increase of triple-extortion ransomware and ransom cartels, it is important to zoom in on decoys. The deployment of time-based database honeytokens shortens incident response time by allowing an enterprise to quickly determine whether the source of a data leak arose from any system breach within the enterprise or was the result of a case of re-hashing of past leaked data from breach databases.


The Great Resignation isn’t over yet

One in four employees don’t feel secure in their current positions and almost half of them plan to explore new job options in 2023, according to a new report that indicates the Great Resignation remains in full swing. Over the past year, more than 4 million workers have quit their jobs every month, according to the US Bureau of Labor Statistics. The report, by human resource management software provider isolved, says the top way employers can improve company culture and retain their workers is by paying their employees market value. “This comes as no surprise, considering pay transparency laws have jumped to the forefront, and the pressure is on employers to eliminate pay inequality within their organizations,” isolved said in its report. “Data shows employees are more anxious, burnt-out and financial security-driven than ever,” James Norwood, isolved’s chief strategy officer, said in a statement. "To combat these concerns, HR departments of all sizes must evaluate what they can automate and gain efficiencies in, enhance what they can to improve employee experience, and extend the impact of their team."


The Professionalization of Ransomware: What You Need to Know

Carson says it is critical that IT professionals are current with the ransomware trends and techniques, as it will help IT professionals identify the best ways to reduce those risks and enhance the security controls for the business they are hired to protect. From his perspective, the breakup of some of the large ransomware criminal gangs makes it more likely that smaller splinter groups will become the top threat in 2023. “They have the knowledge of a larger ransomware gang and can now operate more efficiently, sometimes even more targeted,” he says. Kirk explains ransomware is still largely successful due to security mistakes or weaknesses that usually can be mitigated or eliminated. “The risk from stolen login credentials can be mitigated by employing multifactor authentication,” he says. “Cybersecurity awareness training can reduce the likelihood an employee may be tricked into downloading a malicious attachment.” He adds that promptly patching software -- particularly for internet-facing systems such as email servers or VPNs -- is extremely important, as is ensuring that remote connectivity software is securely managed.



Quote for the day:

"Brilliant strategy is the best route to desirable ends with available means." -- Max McKeown

Daily Tech Digest - December 14, 2022

The nature of the CISO role will be in flux in 2023

“Today’s CISOs are taking up the mantle of responsibilities that have traditionally fallen solely to the CIO, which is to act as the primary gateway from the tech department into the wider business and the outside marketplace,” said James Larkin, managing partner at Marlin Hawk. “This widening scope requires CISOs to be adept communicators to the board, the broader business, as well as the marketplace of shareholders and customers. By thriving in the ‘softer’ skillsets of communication, leadership and strategy, CISOs are now setting the new industry standards of today and, I predict, will be progressing into the board directors of tomorrow.” ... “I also feel that over the last eight to 10 years, the CISO role has become a CISO-plus role – CISO plus engineering, CISO plus physical security, CISO plus operational resiliency, or CISO plus product security. As a result, we’ve seen multiple CISOs that have done a great job with cyber security, fusion centres, SOC and leadership. This has paved the way for the CISO office to become a business enabler and also a transformational technology function.”


Addressing Professional Ethical Dilemmas

The problem lies in determining which actions are considered ethical and which are unethical. Consider the driver waiting at the traffic signal. Would it be considered ethical if the person drove through while the signal was still red if they did so in an effort to bring an injured person to the hospital? The same act, which would normally be considered unethical, can be considered ethical under different circumstances. Professional ethics are not so different from this example. Professionals are supposed to engage in ethical behaviors, but they are not immune to ethical dilemmas such as those described. There is a need to understand and determine which actions are ethical and which are unethical, since stakeholders prefer to do business with reputable enterprises that conduct themselves ethically. An ethical professional helps set the standard for others within the organization. Professionals have an opportunity to not only inspire others to do the right thing, but also to consider what kind of people they themselves want to be. There are various ethical dilemmas that a professional may encounter.


Mastering the Mesh: Finding Clarity in the Data Lake

Data mastering–or the process of taking new records and linking them to pre-existing master records that have already been vetted–was one of the important data quality steps that enterprises traditionally did as part of loading their data warehouses. However, master data management (MDM) largely fell by the wayside as the pace of data creation picked up and the “schema upon read” approach of the data lake took hold. Tamr, which sponsored the 451 Research report, is one of the software vendors trying to bring MDM back and make it relevant in the big data world. The company, which was co-founded by Turing Award winner Michael Stonebraker, accepts that relying on humans alone to power MDM isn’t feasible. Neither is a rules-based approach. But backed by the pattern-matching and anomaly-spotting power of machine learning, MDM can provide that critical data quality step that’s needed in today’s big data world without becoming another bottleneck in the process. ... “Enterprise data needs to be cleansed and standardized for the data mesh concept to work at its full potential,” the 451 Research authors write. 


Preparations for Quantum Cyber Threat Get a Senate Boost

The Quantum Computing Cybersecurity Preparedness Act largely echoes a national security memo the administration issued in May laying out deadlines for agencies to inventory all currently deployed cryptographic systems in order to prioritize their transition to forms of encryption experts say would be invulnerable to speedy quantum computers. The National Institute of Standards and Technology and the National Security Agency are currently developing standards for the implementation of four quantum-resistant algorithms NIST announced in July after inviting scientists around the world to submit their proposals. In anticipation of the algorithms, a January national security memo granted NSA the power to issue binding operational directives to facilitate agencies’ migration to the new standards. In addition to reiterating the administration’s instructions for agencies, including the Office of Management and Budget, the legislation directs OMB to report annually to Congress on the migration effort. The reports should outline the administration’s strategy and projected costs, according to the press release.


How to combat counterfeit network gear

The most obvious sign that a device may be counterfeit is its price. "Too good to be true is just that," says Lessin. He also urges purchasers to keep a sharp eye out for small details that counterfeiters often overlook, such as packaging design and quality, as well as documentation language. Most of the legitimate networking vendors offer comprehensive tutorial videos showing how to tell if you're using an authentic product, says Keatron Evans, principal security researcher at security education provider Infosec Institute. "If you can't verify something as authentic, you should count it as potentially counterfeit," he advises. "Trying to do it the other way around, by looking for signs of counterfeiting, is not as effective because of how rapidly things change." Unfortunately, for many victims, a bogus component will reveal its true fake identity only after it has been deployed. "Counterfeits are most commonly identified when the device fails," says Mike Mellor, vice president of cybersecurity consulting at managed security services provider Nuspire.


An Introduction to Accelerator and Parallel Programming

Today, when we talk about a hardware accelerator, we are often talking about a GPU. However, there are myriad different types of accelerators that have arisen to solve various problems—including deep learning and AI—which utilize hardware specifically designed to perform large-scale matrix operations, the heart of DL workloads. In addition, there are hardware-acceleration technologies built into traditional CPUs like Intel® Advanced Vector Extensions (Intel® AVX) and Intel® Advanced Matrix Extensions (Intel® AMX). With the rise of new accelerators, there is always the challenge of how to program for them. Most accelerators currently available are based on parallel execution and, hence, some form of parallel programming. ... Parallel programming is how we write code to express parallelism in any code/algorithm to get it to run on an accelerator or multiple CPUs. But what is parallelism? Parallelism is when parts of a program can run at the same time as another part of the program. Typically, we break this down into two categories: task parallelism and data parallelism.
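The data-parallelism category can be sketched in C with POSIX threads (the chunk_t and parallel_sum names and the fixed eight-thread cap are our illustrative assumptions, not from the article): each thread applies the same operation, here a sum, to a different slice of a single array, and the partial results are combined after the threads join.

```c
#include <pthread.h>
#include <stddef.h>

/* One thread's view of the shared array: a half-open slice [begin, end)
 * plus a slot for the partial result, so no locking is needed. */
typedef struct {
    const long *data;
    size_t begin, end;
    long partial;
} chunk_t;

static void *chunk_sum(void *arg)
{
    chunk_t *c = arg;
    long s = 0;
    for (size_t i = c->begin; i < c->end; i++)
        s += c->data[i];
    c->partial = s;
    return NULL;
}

/* Data parallelism: identical work, different data per thread. */
long parallel_sum(const long *data, size_t n, size_t nthreads)
{
    pthread_t tid[8];
    chunk_t chunk[8];
    if (nthreads > 8)
        nthreads = 8;
    size_t step = (n + nthreads - 1) / nthreads;
    for (size_t t = 0; t < nthreads; t++) {
        chunk[t].data = data;
        chunk[t].begin = t * step < n ? t * step : n;
        chunk[t].end = (t + 1) * step < n ? (t + 1) * step : n;
        pthread_create(&tid[t], NULL, chunk_sum, &chunk[t]);
    }
    long total = 0;
    for (size_t t = 0; t < nthreads; t++) {
        pthread_join(tid[t], NULL);
        total += chunk[t].partial;
    }
    return total;
}
```

Task parallelism, by contrast, would give each thread a different function to run (say, decoding and rendering) rather than different slices of the same data.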


5 risks of AI and machine learning that modelops remediates

Data scientists are generally not experts in risk management, and in enterprises, a first step should be to partner with risk management leaders and develop a strategy aligned to the modelops life cycle. Wheeler says, “The goal of innovation is to seek better methods for achieving a desired business outcome. For data scientists, that often means creating new data models to drive better decision-making. However, without risk management, that desired business outcome may come at a high cost. When striving to innovate, data scientists must also seek to create reliable and valid data models by understanding and mitigating the risks that lie within the data.” ... When a tree falls in the forest, will anyone take notice? We know the code needs to be maintained to support framework, library, and infrastructure upgrades. When an ML model underperforms, do monitors and trending reports alert data science teams? “Every AI/ML model put into production is guaranteed to degrade over time due to the changing data of dynamic business environments,” says Hillary Ashton.


Talent Transformation Strategies for Security Leaders

A cybersecurity workforce with a growth mindset sees challenges as opportunities to grow, learn and become more resilient and adaptable. The hybrid work environment prevalent today needs security employees working toward a common goal that is aligned with broader organizational objectives. It is the responsibility of security leaders to set the tone at the top and communicate frequently and effectively with their teams on the vision and purpose of the organization’s security functions to the broader business and the value that security unlocks for the business to rapidly scale and expand. ... Security leaders should train their managers to lead and manage teams in this new hybrid working model and educate the cybersecurity staff to deal with the impact on security investments, workforce restructuring and work backlog to meet business requirements. Organizations should build a stronger workforce by augmenting their internal capacity with external security vendors and managed security service providers (MSSPs) where required. Managed services can take the form of outsourcing or co-sourcing models, which can be quick and effective ways to overcome these challenges.


Cloud-based fingerprint system for UK police nears completion

Known as the Transforming Forensics (TF) programme, the capability is hosted by the Police Digital Service (PDS), which is aiming to deliver the first full deployment in March 2023. The PDS said that through access to a digital suite of tools – housed on the PDS Xchange platform, which is powered by Amazon Web Services (AWS) – police forensic teams would be able to send fingerprint and crime scene images in real time, allowing them to identify suspects within hours instead of days, as well as improve work processes by taking them off paper and into automated workflows. ... While the UK data protection watchdog will initially consult with the organisation to advise them on how to make their operations compliant, it also reserves the right to issue two tiers of monetary penalties. These include a “standard maximum penalty” of roughly £9m or 2% of the organisation’s annual turnover, or a “higher maximum” of £18m or 4% of annual turnover. In both cases, the offending organisation will be fined whichever amount is higher.


Platform Engineering Needs a Prescriptive Roadmap

Fundamentally the problem is that all of these transformations have a massive people-interaction component, and the bigger and older you are as an organisation, the more difficult it is to change how people interact, and the higher up the chain you have to go to create organisational change. Having spent time at a “webscale” large tech company, a small-to-medium tech company, and then working for the last decade with a lot of very traditional enterprises, it’s striking how poor internal communication is inside most enterprises compared to tech companies. ... Ultimately success requires being very deliberate about architecting productive team-to-team interactions, with as few intermediaries as possible, and to focus on the feedback loops between the producers and consumers of systems. A common mistake I see folks make is to set an open-ended goal of “collaboration” between teams, with endless meetings and working sessions, and it turns out this is extremely inefficient at scale when your consumers outnumber your producers (which they should do in almost every situation!).



Quote for the day:

"Decision-making is a skill. Wisdom is a leadership trait." -- Mark Miller

Daily Tech Digest - December 13, 2022

The Broken Promise Of AI: What Went Wrong Between 2012 And 2022

Data science as a discipline was poorly understood, and most organizations had not yet implemented a data strategy aligned with their business objectives. Therefore, the first wave of data scientists had the time, training and support of the business to experiment and explore possibilities, just as Patil and Davenport had recommended. “Experimentation” is not scoped to any specific strategic priority, however. In practice, data science was science—a pursuit of knowledge. Real-world applications would have to wait. The consequence is that AI as a concept matured but AI in practice faltered. Over time, data science divisions moved further from the business strategy they were supposed to support. Silos emerged between business and technical units. Small successes were celebrated and held up as indicators that the process was working. But scaling them proved difficult. Executives, unsure why the whole process isn’t automated, continue to invest in people and technology to try to narrow the gap. The problem they face isn’t technological, though. It’s cultural. The goal of a company is not to set up a robust data environment; it’s to build, use and sell data products. 


Disconnect between CEOs & testers puts companies at risk of software failure

So, if there is an acknowledgment that testing is important and a fear that failing to test software could lead to job losses, the obvious question is why is software not tested properly? This often comes down to businesses not thinking there is a viable, cost-effective option and choosing speed over stability. However, there are more specifics we can unpack. When asked why their software wasn’t tested properly before being released, CEOs and testers in the same Censuswide survey cited a few primary reasons. The first is a reliance on manual testing, which is time and resource intensive, so therefore often skipped or rushed. This is compounded by the feeling that development cycles need to be quicker to compete in a crowded market. The next most prominent reasons were a lack of skilled developers available to conduct testing, or a lack of investment in training and development to upskill those already on the team. ... There needs to be a transition from manual testing towards automation to meet the testing requirements of increasingly complex software, with businesses struggling to scale their chosen solutions and leverage existing skills across Quality Assurance departments.


Machines and Megaprojects – AI Trajectory 2023+

Disruptive technologies like AI, blockchain or metaverse herald new value and wealth creation possibilities for many investors and technologists. But then there is a much larger subset of humanity, people for whom the ascendance of these new machines lives as an existential threat. Might the housekeeping robot one day get fed up with serving the morning coffee and turn into a killer robot? From one day to the next, the robot’s owners become slaves. We are tongue-in-cheek here, but these are genuine concerns for many people. When it comes to our jobs, careers, and employment, the big questions at the back of our minds are, “Will my job become obsolete? Will I be terminated? Worse, will I be unemployable, a little pawn in a world run by a super-intelligence?” These are the ethical, moral, and practical questions in the background for which solutions have yet to be invented. ... A hidden bias that disproportionately favors one racial or age or gender group over another in crucial decisions such as hiring individuals is one thing. More chillingly, consider the impact AI bias could have in determining whether someone should be prosecuted or sentenced to prison, and perhaps even the length of their sentence.


The future of finance belongs to open source

While crypto-currency pushes blockchain technologies' limits and makes the headlines, financial services companies are known for their conservative approach to software development. That doesn't mean they've been unfriendly to Linux and open source. It's been quite the opposite. ... So it is, said Gabriele Columbro, FINOS' Executive Director, that open-source adoption continues "laying out the necessary building blocks for an organic, growing, and sustainable open community in the industry. While we know there is still a lot of work to do to reach full maturity, we're extremely proud of the major role that FINOS played in opening up financial services to the disruptive innovation benefits open source can deliver to this sector." Part of that work is that compared with other sectors, such as IT, science, and telecom, financial service companies lag behind in encouraging open-source contribution. Still, more than half (54%) of respondents say contributing to open source improved the quality of the software they are currently using. In addition, active participation in open source was cited as a key factor in recruiting and retaining IT talent.


Citizens Are Happy To Hand Over Data So Long as Use Is Transparent

The old quip that ‘if you’re not paying for the product, you are the product’ has not discouraged people from joining services such as Facebook, which has seen exponential user growth since its launch. According to Statista, 2.7 billion people use Facebook, a figure that has grown remarkably consistently since the company passed 1 billion users in 2012. However, you only have to look at the uproar that Facebook caused in January when it updated WhatsApp’s terms of service to state that data from private conversations would be used to inform ads on Facebook’s other platforms, to see the value people put on transparency. The change led to a 4,200% increase in user growth for rival app Signal. ... ForgeRock’s research suggests that Singaporeans are not averse to providing access to their data, so long as they are told upfront what it will be used for. The outcry over TraceTogether provides a lesson on the importance of transparency when talking to people about how their data is going to be used. This is only going to become more crucial in the future. 


2023 emerging AI and Machine Learning trends

The boundaries between AI and the Internet of Things are blurring. While each technology has merits of its own, only when they are combined do they offer novel possibilities. Smart voice assistants like Alexa and Siri only exist because AI and the Internet of Things have come together. Why, then, do these two technologies complement one another so well? ... We are also moving on from the concept of Artificial Intelligence to Augmented Intelligence, where decision models blend artificial and human intelligence, and where AI finds, summarizes, and collates information from across the information landscape, for example a company's internal data sources. ... Composite AI is a new approach that generates deeper insights from any content and data by fusing different AI technologies. Knowledge graphs are much more symbolic, explicitly modeling domain knowledge, and, when combined with the statistical approach of ML, they create a compelling proposition. Composite AI expands the quality and scope of AI applications and, as a result, is more accurate, faster, more transparent and understandable, and delivers better results to the user.


Is Your Business Ready for the Programmable World?

Imagine a world where the environment around you is as programmable as software: a world where control, customization, and automation are enmeshed in our surroundings. In this world, people can command their physical environment to meet their own needs, choosing what they see, interact with and experience. Meanwhile, businesses leverage this enhanced programmability to reinvent their operations, subsequently building and delivering new experiences for their customers. ... Leading enterprises will be at the forefront of the programmable world, tackling everything from innovating the next generation of customizable products and services, to architecting the hyper-personalized and hyper-automated experiences that shape our future world. Organizations that ignore this trend, fatigued by the unfulfilled promise of IoT, will struggle as the world automates around them. Delaying the infrastructure and technology necessary to tap into this rich opportunity may leave many organizations playing catch-up in a world that has already taken the next step.


Cyber security needs a makeover if we are to meet skills demand

While it’s true the profession is suited to logical thinkers, often with a strength in maths, it is by no means the cliché so often represented. Perception is incredibly important. Young people making decisions about their future are influenced by many factors, from traditional sources such as teachers, careers advisors and family, through to how the media they consume portrays a job role or industry. While there have been heavy-handed attempts to subvert stereotypes (just think of the somewhat notorious government-backed advert depicting a ballet dancer who could retrain to work in cyber security), I do believe the overall sentiment was correct. Next year will see the launch of the cyber security occupational specialism that will form part of the Digital T Level. The qualification is aimed at 16- to 19-year-olds and is equivalent to three A Levels, with a focus on developing technical and vocational skills through a mix of classroom-based learning and an industry placement.


Want to set yourself apart? Own your job

Simplifying complexity is an art form, but such an exercise can easily fall into the trap of oversimplification. And yet, through all my years of asking leaders about the X factors that separate employees, I have wondered what quality actually makes someone stand out and get that promotion. Here’s my vote: an extreme sense of accountability and ownership of the job. People with these qualities figure out how to get something done, even if the path to success is unclear. When things get tough, they don’t point fingers or throw up their hands in frustration or complain that something isn’t fair or is too hard. Ownership is not just about having a strong work ethic—it’s about having a sense of responsibility to follow through and deliver. I saw this quality firsthand in many of the reporters I worked with during my 14 years as an editor at Newsweek magazine and the New York Times. Reporting requires creativity, resourcefulness, and persistence. There were some people who I just knew would get the work done. And when I’ve interviewed business leaders about the qualities that set high performers apart, this theme of responsibility has come up often.


Responsible AI by design: Building a framework of trust

Responsible AI practices have not kept pace with AI adoption for various reasons. Some firms put responsible AI on hold because of legislative uncertainty and complexity, thus delaying value realization on business opportunities. Other challenges include concerns about AI’s potential for unintended consequences, lack of consensus on defining and implementing responsible AI practices, and over-reliance on tools and technology. To overcome these challenges, it’s important to understand that technology alone is insufficient to keep up with the rapidly evolving AI space. Tools, bias detection, privacy protection, and regulatory compliance can lure organizations into a false sense of confidence and security. Overly defined accountability and incentives for responsible AI practices may look good on paper but are often ineffective. Bringing multiple perspectives and a diversity of opinions to technology requires a disciplined, pragmatic approach. To adopt a responsible AI strategy, some key concepts must be kept in mind, starting with setting a strong foundation.



Quote for the day:

"Courage is the ability to execute tasks and assignments without fear or intimidation." -- Jaachynma N.E. Agu