
Daily Tech Digest - May 03, 2023

What You Need to Know About Neuromorphic Computing

Neuromorphic computing is a type of computer engineering that mimics the human brain and nervous system. “It's a hardware and software computing element that combines several specializations, such as biology, mathematics, electronics, and physics,” explains Abhishek Khandelwal, vice president, life sciences, at engineering consulting firm Capgemini Engineering. While current AI technology has become capable of outperforming humans in multiple fields, such as Level 4 self-driving vehicles and generative models, it still offers only a crude approximation of human/biological capabilities and is useful in only a handful of fields. ... Neuromorphic supporters believe the technology will lead to more intelligent systems. “Such systems could also learn automatically and self-regulate what to learn and where to learn from,” Natarajan says. Meanwhile, combining neuromorphic technology with neuro-prosthetics (such as Neuralink) could lead to breakthroughs in prosthetic limb control and various other types of human assistive and augmented technologies.


How the influence of data and the metaverse will revolutionize businesses and industries

Today, business is all about data: collecting, storing, transforming, and analysing it to gain insights—to make decisions. Just as ChatGPT requires massive amounts of data to create human-like language, businesses need data to augment human decision-making. From machine and building performance to energy and emissions, data is the crucial link between the physical and digital worlds. It’s also the key to solving efficiency and sustainability challenges that are now more urgent than ever. If the metaverse is meant to transform business and industries, it must be built on solid data foundations. ... Digital transformation started with connecting physical assets via IoT and edge controls. Its disruptive potential has proven to carry operational and energy efficiency across all levels of an enterprise. When we introduce powerful software capabilities and start leveraging the generated data, we can create virtual representations of the real world by combining simulation, augmented reality (AR), data sharing, and visualization all at once.


Distributed Tracing Is Failing. How Can We Save It?

Engineers are to some degree creatures of habit. The engineering organizations I’ve spent time with have a deep level of comfort with dashboards, and statistics show that’s where engineers spend the most time — they provide data in an easy-to-understand graphical user interface (GUI) for engineers to quickly answer questions. However, it’s challenging when trace data is kept in its own silo. To access its value, an engineer must navigate away from their primary investigation to a separate place in the app — or worse, a separate app. Then the engineer must try to recreate whatever context they had when they determined that trace data could supplement the investigation. Over time, all but a few power users start to drift away from using the trace query page on a regular basis. Not because the trace query page is any less useful. It’s simply outside of the average engineer’s scope. It’s like a kitchen appliance with lots of uses when you’re cooking, but because it’s kept out of sight in the back of a drawer, you never think to use it — even if it’s the best tool for the job.


We’re Still in the ‘Wild West’ When it Comes to Data Governance, StreamSets Says

A lack of visibility into data pipelines raises the risk of other data security problems, the company says. “The research reveals that 48% of businesses can’t see when data is being used in multiple systems, and 40% cannot ensure data is being pulled from the best source,” it says. “Moreover, 54% cannot integrate pipelines with a data catalog, and 57% cannot integrate pipelines into a data fabric.” Who holds responsibility for cleaning up the data mess? Well, that’s another area with a bit of murkiness. About half (47%) of StreamSets survey respondents say the centralized IT team bears responsibility for managing the data. However, 18% said the line of business holds primary responsibility, while it’s split between the business and IT in 35% of cases. A second survey released by StreamSets last week highlights the difficulty in running data pipelines in the modern enterprise. Many companies have thousands of data pipelines in use and are hard pressed to build, manage, and maintain them at the pace required by the business, according to StreamSets.


Quantum computing: What are the data storage challenges?

One of the core challenges of quantum computers is that their storage systems are unsuitable for long-term storage due to quantum decoherence, the effect of which can build up over time. Decoherence occurs when quantum computing data is brought into existing data storage frameworks and causes qubits to lose their quantum status, resulting in corrupted data and data loss. “Quantum mechanical bits can’t be stored for long times as they tend to decay and collapse after a while,” says Weides. “Depending on the technology used, they can collapse within seconds, but the best ones are in a minute. You don’t really achieve 10 years of storage. ...” Quantum computers will need data storage during computation, but that needs to be a quantum memory for storing super-positioned or entangled states, and storage durations are going to present a challenge. So, it’s likely data storage for quantum computing will need to rely on conventional storage, such as in high-performance computing (HPC). Given the massive financial investment required for quantum computing, introducing the limitations of “cheap” data storage elements as a cost-saving exercise would be counter-productive.
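
To make the timescale problem concrete, a standard simplified dephasing model (an assumption used here for illustration, not something stated in the article) treats a qubit's coherence as decaying exponentially with a characteristic coherence time T2:

```latex
% Simplified dephasing model (illustrative assumption, not from the article):
% the off-diagonal element of a qubit's density matrix decays exponentially
% with coherence time T_2.
\rho_{01}(t) = \rho_{01}(0)\, e^{-t/T_2}
% With T_2 on the order of seconds to a minute, as Weides describes,
% coherence is negligible long before archival storage timescales.
```

With coherence times measured in seconds to a minute, stored quantum states fade long before the 10-year retention periods conventional storage takes for granted, which is why a fallback to conventional HPC-style storage is the expected path.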


7 speed bumps on the road to AI

There are many issues and debates that humans know to avoid in certain contexts, such as holiday dinners or the workplace. AIs, though, need to be taught how to handle such issues in every context. Some large language models are programmed to deflect loaded questions or just refuse to answer them, but some users simply won't let sleeping dogs lie. When such a user notices the AI dodging a tricky question, such as one that invokes racial or gender bias, they'll immediately look for ways to get under those guardrails. Bias in data and insufficient data are issues that can be corrected for over time, but in the meantime, the potential for mischief and misuse is huge. And, while getting AI to churn out hate speech is bad enough, the plot thickens considerably when we start using AI to explore the moral implications of real-life decisions. Many AI projects depend on human feedback to guide their learning. Often, a project of scale needs a high volume of people to build the training set and adjust the model’s behavior as it grows. For many projects, the needed volume is only economically feasible if trainers are paid low wages in poor countries.


7 ways to improve employee experience and workplace culture

The traditional hierarchical way of managing employees has been shown to be largely ineffective. Companies run as adhocracies are more productive as they foster knowledge sharing, workplace collaboration, and rapid adaptation—some of the most important attributes for companies in the knowledge-based age. By encouraging employees to be more self-sufficient and less dependent on their superiors, you can promote greater efficiency and effectiveness in the workplace. Start adopting more self-service options for employees. Modern IT and HR systems can be calibrated to your employees’ needs and enable them to help themselves, whether they want to book a vacation, access important documents, get a better screen, or access an enterprise app. Although hybrid and remote work seems to be the preferred model for many organizations, it still has disadvantages. Many remote and hybrid employees struggle to manage the blurred boundary between work and personal life, or the often less-than-ideal workplace setups.


What Does a Strong Agile Culture Look Like?

A strong culture is critical for Agile organizations to be successful. Agile requires organizations, and therefore its employees, to be ready to welcome changing requirements and inspect and adapt at any given moment. Teams are supposed to be self-managing and self-organizing. Stakeholders need to see working products frequently. Breaking that down, expectations are that projects change all the time but still need to be delivered in quick increments to stakeholders, all the while teams are managing themselves. ... Psychological safety in the workplace refers to the extent to which employees feel safe to speak up, share their ideas, and take risks without fear of negative consequences. It is the belief that one will not be punished or humiliated for speaking up with ideas, questions, concerns, or mistakes. When there is psychological safety in the workplace, employees are more likely to be engaged, motivated, and productive. They are also more likely to collaborate, share their knowledge and expertise, and contribute to innovation.


9 ways to avoid falling prey to AI washing

It’s not uncommon for a company to acquire dubious AI solutions, and in such situations, the CIO may not necessarily be at fault. It could be “a symptom of poor company leadership,” says Welch. “The business falls for marketing hype and overrules the IT team, which is left to pick up the pieces.” To prevent moments like these, organizations need to foster a collaborative culture in which the opinion of tech professionals is valued and their arguments are listened to thoroughly. At the same time, CIOs and tech teams should build their reputation within the company so their opinion is more easily incorporated into decision-making processes. To achieve that, they should demonstrate expertise, professionalism, and soft skills. “I don’t feel there’s a problem with detecting AI washing for the CIO,” says Max Kovtun, chief innovation officer at Sigma Software Group. “The bigger problem might be the push from business stakeholders or entrepreneurs to use AI in any form because they want to look innovative and cutting edge. So the right question would be how not to become an AI washer under the pressure of entrepreneurship.”


Skilling up the security team for the AI-dominated era

The increasing reliance on AI and machine learning models in all technological walks of life is expected to rapidly change the complexion of the threat landscape. Meanwhile, organically training security staff, bringing in AI experts who can be trained to aid in security activities, and evangelizing the hardening of AI systems will all take considerable runway. Experts share what security leaders will need to shape their skill base and prepare to face both sides of growing AI risk: risks to AI systems and risks from AI-based attacks. There is some degree of crossover in each domain. For example, machine learning and data science skills are going to be increasingly relevant on both sides. In both cases, existing security skills in penetration testing, threat modeling, threat hunting, security engineering, and security awareness training will be as important as ever, just in the context of new threats. However, the techniques needed to defend against AI and to protect AI from attack also have their own unique nuances, which will in turn influence the make-up of the teams called to execute on those strategies.



Quote for the day:

"Remember teamwork begins by building trust. And the only way to do that is to overcome our need for invulnerability." -- Patrick Lencioni

Daily Tech Digest - June 30, 2020

3 Reasons Why Traceability Should Be a DevOps Priority

One thing that has to be central to your strategy is traceability. You may have come across the term a few times before. It’s commonly used elsewhere in the business world, especially with regard to supply chains. Basically, what it means is keeping track of a commodity or product at every stage of the production process. Records of the product’s entire manufacturing and distribution history are kept so that the sources of any problems can later be determined and dealt with. Traceability thereby ensures that suppliers can act quickly and decisively in the event of a product recall, for example. Another advantage of traceability is that it provides additional transparency, which helps to maintain consumer confidence. As consumers are becoming increasingly aware of how products are sourced and manufactured, this is now an important consideration. It reassures consumers that manufacturers and suppliers are aware of their concerns and that they’re looking out for their best interests. You can see already, then, how much of this also applies to mobile DevOps. Traceability in DevOps is about ensuring clarity, accountability and the best possible end product for the consumer. 


After this COVID winter comes an AI spring

Companies emerging from this recession will adapt processes to “vaccinate” their systems against the next pandemic. In response to supply-chain disruptions, Volkswagen is considering increasing its 3D printing capabilities in Germany, which would give the automaker a redundant parts source. The government-run Development Bank of Japan will subsidize the costs of companies that move production back to Japan. Bringing production back onshore while controlling costs will require significant investment in robotics and AI. Even companies that don’t have their own production capacity, such as online retailers, plan to use AI to improve the reliability of complex global supply chains. So a surge in demand for AI talent is inevitable. ... One relatively new risk that managers must tolerate pertains to data. Even companies that are not yet exploiting their data effectively now recognize it as a valuable resource. As startups deploy AI software systems that prove more accurate and cost-effective than human beings, their early-adopter customers must be more willing to trust them with proprietary data. That will allow AI companies to train new products and make them even smarter.


Tackling Fragmentation in Serverless Data Pipelines

Within the AWS ecosystem, a number of services stitched together provide this experience. And on the analytics team at Equinox Media on which I sit, we’ve embraced this architectural pattern to its fullest — foregoing self-maintained, provisioned servers to handle data processing — and opting instead for a parade of SQS queues, SNS topics, Kinesis streams, and of course, Lambda functions. As a result, diagrams of our data pipelines bear a visual resemblance to a 6th grader’s Rube Goldberg project. And as the metaphor suggests, this paradigm presents new organizational challenges to keep maintenance costs low. When adopting the serverless platform, one thing you’ll quickly notice is a proliferation in the number of code repositories your team is maintaining. This is the result of a common development pattern that calls for a 1:1 ratio of Lambda functions to repos. And while there are benefits to having your business logic fragmented into digestible, bite-sized chunks of code, there are a number of supporting services that are best not replicated and distributed among them.
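
For illustration only, here is a minimal sketch of what one link in such a pipeline might look like: a Lambda function triggered by an SQS queue that applies a small transformation and forwards each record to a Kinesis stream. The queue, stream, and field names are hypothetical, not taken from Equinox Media's actual code.

```python
# Hypothetical sketch of a single pipeline stage: an SQS-triggered Lambda
# that enriches each message and forwards it to a Kinesis stream.
# All names (stream, fields) are illustrative placeholders.
import json

import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "analytics-events"  # hypothetical downstream Kinesis stream


def handler(event, context):
    """Entry point for the SQS trigger; one such function typically lives in its own repo."""
    records = event.get("Records", [])
    for record in records:
        payload = json.loads(record["body"])     # SQS message body
        payload["pipeline_stage"] = "enriched"   # minimal, illustrative transformation
        kinesis.put_record(
            StreamName=STREAM_NAME,
            Data=json.dumps(payload).encode("utf-8"),
            PartitionKey=str(payload.get("user_id", "default")),
        )
    return {"processed": len(records)}
```

Multiply this by dozens of single-purpose functions, each in its own repository, and the supporting services the author alludes to are exactly the pieces best factored into a shared library rather than copied into every repo.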


Create Symbiotic Relationships with AI in Business

When humans have specific types of problems, we’ve built and trained machines to solve those problems. Examples include machine learning (ML): ML algorithms can identify cancer in brain images, they can determine the best placements or designs for online ads, and deep learning systems can predict customer churn in business. At the moment, we can only imagine how much more productive we will become as we form symbiotic relationships with AI. Routine tasks that currently take hours or days could be abbreviated to 10 or 15 minutes with the aid of a digital partner. From simple exercises like finding a new restaurant to more expert tasks such as cancer detection, we will increasingly rely on machines for everyday tasks. Dependence on machines might begin as a “second pair of eyes” or “a second opinion,” but our commitment to machines (and AI) will evolve into full-on digital collaborators. ... Machine learning could bring about a revolution in how we solve problems to which the principle of “optimal stopping” applies.
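
As a purely illustrative aside (not from the article), the best-known form of optimal stopping is the "secretary problem": observe roughly the first n/e candidates without committing, then accept the next candidate better than everything seen so far. A short simulation sketch shows why the rule is interesting:

```python
# Illustrative simulation of the classic secretary-problem stopping rule
# (skip ~n/e candidates, then take the first one better than all of those).
# A textbook example of optimal stopping, not code from the article.
import math
import random


def pick_with_stopping_rule(scores):
    """Return the score chosen by the n/e stopping rule."""
    n = len(scores)
    cutoff = int(n / math.e)                      # observe-only phase
    benchmark = max(scores[:cutoff]) if cutoff else float("-inf")
    for score in scores[cutoff:]:
        if score > benchmark:                     # first candidate beating the benchmark
            return score
    return scores[-1]                             # forced to take the last candidate


def success_rate(n=100, trials=10_000):
    """Estimate how often the rule selects the single best candidate."""
    wins = 0
    for _ in range(trials):
        scores = random.sample(range(n * 10), n)  # distinct random scores in random order
        if pick_with_stopping_rule(scores) == max(scores):
            wins += 1
    return wins / trials


if __name__ == "__main__":
    print(f"Picked the best candidate in ~{success_rate():.0%} of trials")  # roughly 37%
```

The roughly 37% success rate this simple rule achieves, without ever seeing all the options, is the kind of result that makes machines plausible as digital collaborators for decision problems of this shape.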


Battling Cybercriminals on the ‘Digital Frontline’

People have a degree of protection when they are sitting amongst their colleagues. When suspicious emails come in, it is far easier to speak to a colleague and verify their authenticity. However, as people are now working from home, and they are isolated and often alone, that becomes much harder. Where web and email have been the traditional vectors for these kinds of attacks, we are now seeing phishing attempts across multiple platforms, including social media and SMS. Every nation is being targeted and phishing emails appear in almost every language. In many ways, this is the largest set of cyber campaigns we have ever seen. Many of these emails offer falsified information or promises of help related to the pandemic. In one campaign found by Proofpoint, they even promise cures – which is something that malicious actors know the public are interested in and are likely to immediately pay attention to. These attackers are after personal information from anyone and everyone, such as login credentials, name, date of birth and government ID details, or want to trick victims into installing malware on systems. A mixture of old, reskinned and relatively new malware is being used to attack users. We are looking at a cybercrime gold rush.


Where Tech Meets Community – Harnessing Tech For Good

Indeed, it is when talent, technology and collaboration come together that incredible advances can be achieved, and at scale. This is exemplified in the solidarity of the technology sector to make a difference, bringing people closer across work, learning and entertainment despite lockdowns, and combating the virus through telemedicine and AI-assisted diagnosis, alongside helping to accelerate the research and drug development innovation curve. A notable example is the rapid establishment of the HPC Consortium involving 11 tech firms assisting federal government, industry and academic leaders across the world with access to expertise and high performance computing capacity. With a mobilization such as this, it is no surprise that by early April 2020, 50 potential vaccines and nearly 100 possible treatment drugs were in development, a feat that would have been unimaginable just a few weeks earlier. Emergency initiatives and innovations like this can also lay the ground for long-term change, from business and education to healthcare and government.


Prepare for the rise of the IT automation architect

IT automation architects are typically found in DevOps organizations. It's fruitless to focus on a comprehensive automation strategy without a cooperative, integrated DevOps structure already in place. Because of the specialized nature of the job, architects are typically found in larger enterprises or those, like many cloud-native startups, that have mature DevOps practices. There's a wide variety of job titles and associated skills found under the DevOps umbrella. For example, a recent DevOps skills report from the DevOps Institute, a learning association for DevOps professionals, identified more than a dozen DevOps job titles for which organizations are hiring. "DevOps engineer/manager" was the most common title, cited by 51% of survey respondents -- who included IT professionals, DevOps practitioners, HR managers and consultants. "Automation architect" was the 9th most-cited job title at 15%. The following chart summarizes other notable job titles and their response rates. When the same group of survey respondents was asked to rate the importance of various skills to DevOps work, proficiency at automation ranked at the top, with 66% citing it as very important and only 1% listing it as optional or unimportant.


How the COVID-19 Pandemic Will Propel Humanity 20 Years Ahead in Tech

Once executives around the world realize that their employees can not only work in online-first environments but are thriving and being even more productive, with greater opportunities for collaboration with their peers, they will embrace this “new” way of doing business. That, in turn, will unlock many benefits of scale and productivity that were unimaginable in the previous decades. The key driver of change will be that, now, every vendor or business partner can be assumed to be an online-first operator, and dozens and hundreds of legacy barriers will disappear practically overnight. Essentially, every business on the planet not only can, but will, run like a Silicon Valley startup. Imagine, instead of attending five conferences a year, being able to attend and collaborate at 50 virtual conferences while being more efficient with our time, given the removal of all that unnecessary travel. Imagine, instead of a few business development conversations in a given quarter, being able to have one hundred, now that the vast majority of our peers are in the same Slack or Telegram groups. Imagine that instead of a few dozen local restaurants, we will now have the choice to order from thousands.


Massive complexity endangers enterprise endpoint environments

In addition to heightening risk exposure, the failure of critical endpoint controls to deliver their maximum intended value is also undermining security investments and, ultimately, wasting endpoint security spend. According to Gartner, “Boards and senior executives are asking the wrong questions about cybersecurity, leading to poor investment decisions. It is well-known to most executives that cybersecurity is falling short. There is a consistent drumbeat directed at CIOs and CISOs to address the limitations, and this has driven a number of behaviors and investments that will also fall short.” “What has become clear with the insights uncovered in this year’s report is that simply increasing security spend annually is not guaranteed to make us more secure,” said Christy Wyatt, President and CEO of Absolute. “It is time for enterprises to increase the rigor around measuring the effectiveness of the investments they’ve made. By incorporating resilience as a key metric for endpoint health, and ensuring they have the ability to view and measure Endpoint Resilience, enterprise leaders can maximize their return on security investments.”


Q&A on the Book Becoming an Effective Software Engineering Manager

It's all about getting oriented and understanding the team, the work they're doing, and the company. I typically use a process which can be followed when landing somewhere new. It involves creating a snapshot of the situation in which you can begin to work with your team. This snapshot is formed of three things: your own observations, your manager's observations, and your team's observations. Your observations are what you see as you settle in and collect information from your team and your manager. We outline a number of techniques for new managers to ask questions to discover what's really going on inside the team, what they're working on, and where there may be ambiguities or frictions. These involve informal conversations, booking in weekly one-to-one meetings, and diving deeper into what they're building and why. Then, as well as doing this downward, we also do this upward by having the new manager ask their manager about the same things. Do they think differently than what the team reports? Why? Are they prioritizing well? If not, why not?



Quote for the day:

"There's a fine line between stubbornness and the positive side of that, which is dogged determination." -- @JebBush