Daily Tech Digest - August 19, 2023

Inside the Rise of 'Dark' AI Tools - Scary, But Effective?

This shouldn't be surprising, since building LLMs is an intensive endeavor. "As what WormGPT showed, even with a dedicated team of people, it would take months to develop just one customized language model," Sancho and Ciancaglini said in the report. Once a product launched, service providers would need to fund not just ongoing refinements but also the cloud computing power required to support users' queries. Another challenge for would-be malicious chatbot developers is that widely available legitimate tools can already be put to illicit use. Underground forums abound with posts from users detailing fresh "jailbreaks" for the likes of ChatGPT, designed to evade providers' restrictions, which are designed to prevent the tool from responding to queries about unethical or illegal topics. In his WormGPT signoff earlier this month, Last made the same point, noting that his service was "nothing more than an unrestricted ChatGPT," and that "anyone on the internet can employ a well-known jailbreak technique and achieve the same, if not better, results by using jailbroken versions of ChatGPT."


4 ways simulation training alleviates team burnout

Simulation training boosts confidence because, unlike traditional training methods, the learner gains experience over time through true-to-life virtual cyber warfare training and sparring against simulated malicious adversaries that behave like human opponents. By training in the same IT infrastructure they have at their job—complete with networks, servers, and security tools—they improve competencies and judgment skills and gain “muscle memory,” so they feel prepared to respond to a real cyber incident. ... With simulation training, SOC teams learn to identify false positives and high-priority alerts more effectively over time as they become familiar with the types of alerts that end up impacting their organization’s infrastructure. The training can mimic the high volume of alerts they receive during the day and help teams develop effective triage strategies to streamline their response processes. Practicing this in simulation allows teams to experiment with their approach and fine-tune it without fear of making a mistake during operating hours.


A managerial mantra in the age of artificial intelligence

The rise of modern management brought forth professionalism through business schools, advocating ethical standards and fostering professional workplaces globally. Often, this professionalism is rooted in the mastery of managerial principles. These principles are created and taught by a variety of business school professors, and they are developed in close collaboration with executives and leaders. Unfortunately, a lot of these ideas have been applied only sparingly due to practical limitations. These limitations may result from the limited time available for decision-making in the corporate world, the need to manage uncertainties, the lack of data and accurate knowledge of the facts, and occasionally even ignorance of professional principles. ... Organisational thinkers have traditionally identified that this leads to satisficing, whereby managers have to be satisfied with the good-enough, not necessarily the best, choice. In other words, constraints on time availability lead a manager to do a limited analysis of the impact of a job candidate on future organisational performance.


Five Challenges in Implementing AI in Automation

Accuracy and bias are two critical, yet recurring issues in AI that require human supervision. For example, generative AI applications are prone to hallucination, or making up facts based on their training dataset. In the same vein, biased datasets fed into a machine learning model can produce biased results. If a financial services firm is using an AI-driven automated system to accept or reject credit applications, for example, it’s essential to avoid well-documented, systemic biases toward women or people of color that may be contained in the training dataset. As we progress toward AI-driven decision-making, it’s critical for humans to remain in the loop, verifying the results generated by machine learning algorithms to check bias and other forms of inaccuracy. Keeping humans in the loop is a critical step toward re-training algorithms to perform more effectively in a production environment. ... Regulating AI is an ongoing issue globally, and the legal field continues to be shaped by emerging technologies including generative AI. 
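The "well-documented, systemic biases" check described above can be made concrete. Below is a minimal sketch of the four-fifths (disparate impact) rule, a common first-pass fairness screen; the group labels, sample decisions, and review threshold are invented for illustration and are not from the article:

```python
def selection_rates(outcomes):
    """Approval rate per group from (group, approved) pairs."""
    totals, approved = {}, {}
    for group, ok in outcomes:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(outcomes, protected, reference):
    """Protected group's approval rate over the reference group's.
    Values below 0.8 fail the common 'four-fifths' screening rule."""
    rates = selection_rates(outcomes)
    return rates[protected] / rates[reference]

# Invented decisions: group A approved 8/10, group B approved 4/10.
decisions = ([("A", True)] * 8 + [("A", False)] * 2
             + [("B", True)] * 4 + [("B", False)] * 6)
ratio = disparate_impact_ratio(decisions, protected="B", reference="A")
# ratio of 0.5, well under 0.8: route these decisions to human review
```

A screen like this is exactly where the human stays in the loop: the ratio flags a disparity, and a person decides whether it reflects bias in the training data or a legitimate factor.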


Mastering Agile Security: Safeguarding Your Projects in a Fast-Paced World

Just ensuring rapid delivery of the product is not enough. The key to Agile success is to ensure that security is an integral part of the process from the beginning. And since agile is an iterative process, and is all about accommodating changing requirements as and when they arise, security must also be part of this iterative process. Regular security reviews and tests whenever there is a change in the product is the key to delivering a working as well as secure product. ... Agile security is not an impediment to the Agile process; rather, it's an essential component that ensures the final product is robust, resilient, and safeguarded against potential threats. It's not about slowing down development but about integrating security seamlessly into every phase of the project lifecycle. ... At the core of Agile security is the Agile mindset. This mindset emphasizes collaboration, adaptability, and constant improvement. Security is not a one-time event but an ongoing effort that requires the entire team's commitment.


Managing Software Development Team Dynamics from Within

In most cases, the whole team will benefit from trying new tools or services every now and then, just to understand patterns and trends. We know we should always be increasing automation. However, especially with things like JavaScript frameworks, up jumps the New Pusher — too keen to adopt the new when no evidence exists that the gains are worth the disruption cost. Or worse, ignoring the disruption cost entirely. The New Pusher can make the team pine for the road not taken, rather than doing what they should: investigating a little on their own time to see how the team would truly benefit from their shiny find. When thinking about adopting a new tool or service, the team should not trial it somewhere inconsequential, as that will be neither conclusive nor beneficial. A short examination or study period should lead to a yes/no decision and the use of the tool or platform somewhere of value. Once the pattern is set, the New Pusher can work to that template. The suspicion that people just want to put new experiences on their CV is a little irrelevant.


How Generative AI Is Making Data Catalogs Smarter

Sequeda explained how generative AI, which leverages conversational, chat-oriented interfaces to surface results from large language models (LLMs), improves productivity and encourages the adoption of a data catalog. With more traditional data catalogs, administrative tasks require more significant manual interventions, time, and some advanced skills and analysis. Smart catalogs remove these barriers by simplifying and automating some of the administrative workflows. As a result, team members in an organization see faster time to value and find it easier to get started with the catalogs. On the data producers’ end, Sequeda said, “Generative AI automatically enriches metadata around the inputs and provides descriptions and synonyms” in the data catalog, smoothing catalog record creation and upkeep. Also, smart data catalogs give data engineers “code summaries” about catalog queries, reducing the time spent on DataOps, including diagnosing any pipeline malfunctions. Using smart data catalogs, consumers find inspiration when the generative AI suggests alternative queries from previous searches and patterns of results.


Four Myths About Digital Transformation And How To Debunk Them By Modernizing At The Data Layer

A data fabric architecture is essentially a data mesh with an added “abstraction layer” that virtualizes all data into a centralized platform. The benefit is a single pane of glass for all data, virtualized and contextualized for a broader range of business users to work with. The trade-off is that this sudden visibility can be daunting for DX teams newly tasked with untangling all the previously unseen dependencies, vulnerabilities, governance issues, and compliance or security gaps that suddenly appear. All three approaches remain represented in today’s marketplace for organizations to choose from. And while the calculus for making the choice will vary for each company based on its DX goals and level of technical expertise, a common ingredient to success is to prioritize scalable and repeatable processes through automation and low-code wherever possible. ... Choosing the right underlying data architecture is an ongoing balance of matching the pros and cons of the approach to the specific business and operational needs of the organization.


A license to trust: Can you rely on 'open source' companies?

Amanda Brock, CEO of OpenUK, which doesn't have a horse in the IaaS race, appeared disappointed with the company's move. "HashiCorp has always been a true open source company, and what Mitchell Hashimoto and Armon Dadgar achieved from a project never intended to be commercialized has been incredible." Brock then asks: "Taking it to an IPO and seeing Mitchell have the apparent wisdom to step aside and allow a more experienced individual to run HashiCorp – but has that also led to its downfall as an open source company?" Her answer is yes. "The statements about BSL are sadly open-washing. It would be wrong to suggest these two ever intended a bait and switch, but they have indeed switched away from open source. The pressure of enabling their competitors with their innovations – an inevitability of open source – did not align with the need to generate shareholder value." That led her to another, bigger question: "How much money is enough? Is a lot of money, with others generating a lot of money too, a reason to stop?" She's left wondering whether, had Mitchell remained CEO, this would have occurred.


Culture Transformation: What leaders need to know

Fortunately, culture only appears enigmatic: There are practical, tangible, measurable ways leaders can properly manage their culture. And it all starts with alignment. Executives need to be on the same page with their leadership teams -- particularly CHROs -- about where their culture stands today and where it’s headed in the future. You might be thinking: “We’re already aligned about our culture.” But it’s not enough to be generally on the same page. The best leaders are synchronised on specific, seemingly small details about their culture and how they affect performance. In one of our client organisations, the goal of being a high-performance culture is behind all decisions. Every leadership meeting keeps high-performance front and centre in their conversation. For instance, leaders might be on the same page about the core values and beliefs -- such as customer-centricity or excellence in safety outcomes -- that they want their culture to embody. But the best path to excellence varies tremendously by industry, market segment, product and more. 



Quote for the day:

"Success is not a random act. It arises out of a predictable and powerful set of circumstances and opportunities." -- Malcolm Gladwell

Daily Tech Digest - August 18, 2023

Though simpler, India’s data privacy law is stricter than GDPR in some ways

If you think this is all a tough ask, you should know that the law is simpler and less prescriptive than data privacy laws in many countries. A simpler law of this kind is appropriate for a country like India for two reasons: one, India is just starting down the road of data privacy compliance, and two, India has a huge SME sector that would struggle to comply with a more complex law. At the same time, the law is stricter than GDPR in some ways; for example, in the EU, a business that can develop a case for having a “legitimate interest” to process personal data can do so without consent. This is largely not possible in India. Further, in the EU, a data breach needs to be reported to the regulator, and to individuals, only where the data fiduciary concludes that the breach could result in a risk to the rights and freedoms of the individual. The government has given itself the power to exempt classes of data fiduciaries from provisions of the law. This includes start-ups, which have been specifically mentioned.


Exploring Differences Between Diversity and Inclusion

At an organizational level, both diversity and equity can be addressed through recruitment processes, but inclusivity is the most challenging and up to the company as a whole, including all employees. One of the ways to encourage employees to adopt inclusive behavior is through the power of education. When people understand why change is important, they are often more inclined to respond. The word “inclusive” is not a new concept—however, sometimes it is referred to with little substance. Workplaces say they are inclusive because they have a diverse representation of employees, but when you ask the minority groups in that organization if they feel heard, the answer is often conflicting. Rather than playing the game, they feel as though they are mascots or warming the bench. When employees realize inclusion means making sure minority groups feel like they belong, it allows them to assess and challenge their own personal bias, which may be preventing them from fully embracing all perspectives.


Breached for years: How long-term cyber attacks are able to linger

The first step for any cyber criminal looking to pull off a years-long hack is to find a way into a target’s network. Even when organizations make it difficult, there’s usually one entry point. Whether by using initial access brokers (IABs), exploiting vulnerabilities, or using employee credentials – the most effective of the three – they need to get in without tripping any alarms. During the early days of a breach, hackers will do very little other than observe a business and how its people work. They’ll learn all the different processes that staff execute during a typical workday and use that knowledge to mask their movements around the network. There will be no intrusive actions (data exfiltration, vulnerability exploits, lateral movements) until they know how to blend in with everyday traffic being triaged by the organization’s security operations center (SOC) analysts. Attackers usually indulge in one of two methods to remain undetected for extended periods of time. The first is when they use genuine compromised credentials and mimic that employee’s usual behavior.


Tech leaders weigh in on the upside and flipside of generative AI

So if projects are already getting off the ground, what are feelings about where generative AI works best, and how? “The best practises are undoubtedly cross-functional collaboration, ‘try before you buy,’ and learn from what you do,” says Marc O’Brien, CIO at radiology healthcare service provider Medica Group. “In my experience, the algorithms from reputable firms do what they say on the tin but what really matters is where you position in the workflow.” Team Teach’s Ivell believes companies can gain a fast start by using tools being built into applications and suites. “One of the key and immediate opportunities of generative AI is it’s already being built into some tools we already use, be that Power BI, Adobe or more industry-specific apps,” he says. “To take advantage of these needs some internal discovery or analysis of these new functions, understanding how we’d use them, and, in the first instance, training our staff how to exploit the new features. People tend to use tools in the way they always have, and adoption of new features can be slow, so we need to accelerate that.”


6 best practices to defend against corporate account takeover attacks

It’s important to have strong multifactor authentication around all corporate accounts, says Bryan Willett, CISO at Lexmark. "What we’re finding with some of the latest phishing services that are out there, such as EvilProxy, is that they’re getting very good at imitating a login screen that looks just like your corporate login screen and your corporate MFA challenge," Willett says. "And the user has the potential of falling victim to that and sharing their MFA." ... Organizations should also implement contextual access management that considers a user’s current location, the device being used, time of access, network environment, behavior patterns, and other contextual information, according to Halstead. "By doing so, the risk of unauthorized access, often exploited in corporate account takeovers, can be significantly minimized," he says. ... Employee education and awareness are critical, says Halstead. This "human firewall" remains a very important defense in preventing corporate account takeovers.
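Halstead's contextual access management can be pictured as a rule-based risk score over login signals. The signals, weights, and thresholds below are entirely illustrative assumptions, not any vendor's actual logic:

```python
def access_risk(ctx):
    """Toy contextual-access score: each anomalous signal adds risk.
    Weights are invented for illustration only."""
    score = 0
    if ctx.get("country") not in ctx.get("usual_countries", []):
        score += 40          # unfamiliar location
    if not ctx.get("managed_device", False):
        score += 30          # unmanaged or unknown device
    if not 7 <= ctx.get("hour", 12) <= 19:
        score += 15          # outside typical working hours
    if ctx.get("impossible_travel", False):
        score += 50          # two logins too far apart to be real
    return score

def decide(score):
    """Map a risk score to an access decision."""
    if score >= 70:
        return "block"
    if score >= 30:
        return "step-up-mfa"
    return "allow"

action = decide(access_risk({
    "country": "FR", "usual_countries": ["US"],
    "managed_device": True, "hour": 14,
}))  # unfamiliar location alone scores 40, triggering step-up MFA
```

Real products weigh many more signals and learn per-user baselines, but the shape is the same: context raises or lowers friction instead of applying one static policy to every login.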


Make Data Security Training Fun and Engaging with These Tips

What are your employees most interested in? What’s most likely to capture their attention? If you don’t know, ask. Gather insights from employees to identify their current concerns and interests and integrate those into the content. Consider how you could leverage their personal interests in your storytelling approach. For instance, if you have a large base of avid football fans, how might a Super Bowl-themed story or challenge related to data security help capture their interest? Ensure accuracy while entertaining: learning outcomes need to take center stage in your communication efforts, of course. Strive to provide accurate information about cybersecurity and employees’ roles in helping to protect systems and data, while integrating some fun into the delivery of the content. ... Good stories have a protagonist (in this case, the employees), an antagonist (cybercriminals), and some tension that leads to a climax in the plotline. Use these elements to create content that entertains while also illustrating the tangible outcomes and repercussions of poor data security practices, like the potential damage to personal and professional relationships.


Robotic Process Automation: Is Your Job at Risk?

One thing is certain: Change is inevitable. Hairston points to The World Economic Forum’s The Future of Jobs Report 2023, which estimates that 44% of workers’ skills will be disrupted over the next five years. The current pace of technology evolution is transforming jobs faster than ever. IT can either be a key facilitator of the change or a recipient of the change. “In the former case, IT can push the business toward RPA and other automation technologies that are designed to be used by business,” he says. “This will help companies achieve their most strategic objectives and view IT as more of a partner.” As RPA and AI gain stronger footholds, the only way forward is to help displaced team members reskill and upskill, Zhao says. “Fortunately, many online training platforms are available at affordable costs.” To expedite learning, he advises organizations to develop curated content that employees can freely access. Zhao notes that such content should be relevant to both the work being phased out as well as to the tasks that team members will need moving forward. Executive-level sponsorship of any intelligent automation strategy is essential for long-term success.

We think that as general-purpose robots are becoming more common—and they are—people could misuse them. You can find videos online showing how easy it is to attach a weapon to a mobile robot. So, there’s a reasonable concern about who will have access to robots and what they can do with that access. We want to make sure that there will be some regulation around this—and lead the charge in getting it put into place. Policymakers need to get engaged and be informed about the capabilities of the robots, as well as the potential dangers. We are being vocal about our anti-weaponization stance: robots should not cause harm, nor should they impinge upon anybody’s privacy. The industry that we’re hoping to build only exists if people trust robots. If they’re afraid of them, then that’s going to be a problem. ... By managing the final assembly ourselves, we have better control over the quality and cost, and it helps us to rapidly iterate. One of the things we have learned is that when you iterate your design and work with a partner to do the assembly, the communication challenges are pretty thick. 


The Architect’s Guide to Thinking about Hybrid/Multicloud

While most people will tell you complexity is the hardest thing to manage in a multicloud environment, the truth is that consistency is the primary challenge. Having software that can run across clouds (public, private, edge) provides the consistency to manage complexity. Take object storage. If you have a single object store that can run on AWS, GCP, Azure, IBM, Equinix or your private cloud, your architecture becomes materially simpler. Consistent storage and its features (replication, encryption, etc.) enable the enterprise to focus on the application layer. Consistency creates optionality, and optionality creates leverage. Reducing complexity can’t come at some unsustainable cost. By selecting software that runs across clouds (public, private, edge) you reduce complexity and you increase optionality. If it’s cheaper to run that workload on GCP, move it there. If it’s cheaper to run that database on AWS, run it there. If it’s cheaper to store your data on premises and use external tables, do that.
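The "run it where it's cheaper" reasoning above reduces to a simple placement decision once the software layer is consistent across clouds. A minimal sketch, with a hypothetical cost table standing in for real billing data:

```python
# Hypothetical per-workload monthly costs (USD); in practice these
# would come from each provider's billing API. Consistent software
# across clouds is what makes this a pure cost decision rather than
# a migration project.
COSTS = {
    "analytics-db": {"aws": 910, "gcp": 760, "on-prem": 830},
    "object-store": {"aws": 240, "gcp": 260, "on-prem": 180},
}

def cheapest_placement(costs):
    """Pick the lowest-cost location for each workload."""
    return {wl: min(clouds, key=clouds.get) for wl, clouds in costs.items()}

placement = cheapest_placement(COSTS)
# {"analytics-db": "gcp", "object-store": "on-prem"}
```

This is the optionality the author describes: when the storage and application layers behave identically everywhere, the placement function is the only thing that has to change.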


Observability – everything you need to know

The first thing you need to do on any of these platforms is to get your data into it. Historically, for log analytics solutions like Splunk, that was relatively easy. I don’t mean to trivialise this, but you would grab logs from all of your infrastructure and send those back to Splunk, and we process those. You would usually deploy an agent to do it. For observability solutions – not just ours, but any of them – you need more data. In addition to the logs that you capture from each host, you also need system metrics and application metrics and profiles and distributed traces and everything else. There are additional layers of complexity here. Now you’re not just capturing human-readable logs from operating systems, you’re capturing all these other types of data from the individual applications that people have written. That requires hooks into all of the hundreds of thousands of libraries that software developers use. I think that has historically held back this industry to a fairly large degree. We rely on a project that I co-founded with a number of other people, and a number of other companies.



Quote for the day:

"I believe it is important for people to create a healthy mental environment in which to accomplish daily tasks." -- Darren L. Johnson

Daily Tech Digest - August 17, 2023

What would an OT cyberattack really cost your organization?

Attacks on industrial control systems (ICS) may not be just about ransomware or accessing information but about deliberately making machines misbehave. Attackers can exploit vulnerabilities to make machines overheat or robotic arms swing unpredictably. A failed attack on a water utility in Florida attempted to raise the amount of lye in the drinking water; success might have killed thousands. ... When operations in your factory, plant, or substation shut down, revenue will cease. So an important question, not just for the CISO but for Operations, Finance, and other chiefs, is how long you can go without the expected revenue that you may never see. ... There will be significant damage to an organization's public reputation as news of an attack gets out. The customer trust that took years to build may be gone in an instant, and customers forced to find another supplier while you're shut down may not come back. After all, your shutdown not only inflicted damage on companies further down the chain, it may also have created an impression that you were careless in letting it happen.
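The revenue question posed above can be roughed out with simple arithmetic. This sketch uses invented figures purely for illustration; real estimates would draw on finance data and incident-response quotes:

```python
def downtime_cost(hourly_revenue, outage_hours,
                  recovery_cost=0.0, revenue_recovered=0.0):
    """Rough outage cost: lost revenue (less whatever share is
    eventually recovered as delayed orders) plus remediation spend.
    Every input here is an estimate."""
    lost = hourly_revenue * outage_hours * (1 - revenue_recovered)
    return lost + recovery_cost

# A plant earning $50k/hour, down for 72 hours, recovering 20% of
# orders later and spending $200k on remediation:
cost = downtime_cost(50_000, 72, recovery_cost=200_000,
                     revenue_recovered=0.2)
# roughly $3.08M, before any reputational or downstream losses
```

Even a back-of-the-envelope number like this gives Operations and Finance a shared starting point for the "how long can we go" conversation.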


The Risk of Quantifying Cyberrisk

Legal concerns could stem from the nature of risk quantification. This process is designed to uncover problems with an actionable amount of detail. Anything that is discoverable in a legal proceeding can find its way into a court case and embarrassing fallout may ensue. The fear is that the very detailed CRQ risk assessment results will be made public. For many organizations that have not adopted CRQ, such results may include lists of broken or missing controls and audit results, all with corresponding verbal risk labels (e.g., high, medium, low). They could (and really should) also include a list of scenarios with the same risk labels attached to them. These results alone could be damning to some organizations. Specific CRQ concerns stem from having all of these elements tied to a potential amount of loss and frequency. However, it is difficult to imagine a court proceeding where strictly qualitative results would allow an organization to walk free.
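The "potential amount of loss and frequency" that CRQ attaches to each scenario is commonly summarized as an annualized loss expectancy: expected loss per event times expected events per year. A minimal sketch, with invented figures:

```python
def annualized_loss_expectancy(single_loss, annual_frequency):
    """Classic CRQ summary: expected loss per event multiplied by
    the expected number of events per year."""
    return single_loss * annual_frequency

# Hypothetical scenario: ransomware with an estimated $2M loss per
# event, expected once every four years (frequency 0.25/year).
ale = annualized_loss_expectancy(2_000_000, 0.25)  # $500k/year
```

It is precisely this kind of dollar figure, tied to a named scenario and a list of control gaps, that the article warns could become discoverable in litigation.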


The CISO Report – The Culture Club

The report highlighted a number of key challenges facing organizations in EMEA, which are clearly now being discussed in the C-suite. These challenges include the level of regulatory compliance that organisations now face, especially those operating in these regions. In my opinion, the General Data Protection Regulation (GDPR) is still a massively misunderstood piece of legislation that organisations need help with; yet the C-suite recognises the importance of it. Added to this is the ongoing threat of cybercrime, as organisations large and small are facing an increasing number of cyberattacks, including ransomware attacks, data breaches, and Distributed Denial of Service (DDoS) attacks. ... To embed cybersecurity and data protection within an organisation, you do not look to build a security culture; rather, you look to build a culture that respects the importance of security. This is a simple, yet profound distinction. Every organization possesses a culture, which might either emerge naturally or be intentionally and meticulously developed. Regardless of its origins, the influence of this culture on an organization remains undeniable.


AI for Data Management: An Old Idea with New Potential

No matter how you choose to leverage AI in the data management space — whether you're using AI for more basic needs or you're taking advantage of next-generation AI technologies — your goal should be to identify ways that AI can accelerate workflows and reduce toil for data engineers. Much of the work that data engineers perform on a daily basis can be tedious and time-consuming. Converting data from one format to another by hand could take enormous amounts of time and is a boring task, to put it mildly. So is sifting through vast volumes of information to find data quality issues like redundant or empty cells. Even if you leverage tools to help search and sort data automatically, you're still likely to find yourself investing an inordinate amount of time on data quality if you have to write complex queries by hand to detect quality problems. But if you can substitute AI-based workflows for these tasks, you save yourself a lot of time and labor. 
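The tedious quality scans described above, hunting for empty cells and redundant rows, are easy to picture in code. A small stdlib-only sketch over an invented two-column dataset; real pipelines would run the same checks at far larger scale:

```python
import csv
import io

def quality_report(csv_text, key_field):
    """Flag empty cells and duplicate keys: the kind of hand-written
    scan the article suggests AI-based workflows should absorb."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    empty = [(i, field) for i, row in enumerate(rows)
             for field, value in row.items() if not (value or "").strip()]
    seen, duplicates = set(), []
    for i, row in enumerate(rows):
        key = row[key_field]
        if key in seen:
            duplicates.append(i)
        seen.add(key)
    return {"empty_cells": empty, "duplicate_rows": duplicates}

SAMPLE = "id,name\n1,Ada\n2,\n1,Ada\n"
report = quality_report(SAMPLE, key_field="id")
# empty_cells: [(1, 'name')]; duplicate_rows: [2]
```

The point is not that this code is hard to write, but that every new dataset demands another variant of it; that repetitive authoring is the toil an AI assistant can take over.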


Low-code and no-code: Meant for citizen developers, but embraced by IT

Low and no-code continue to gain popularity because organizations "are realizing that these tools are not just for early-stage or beginner citizen developers but also for sophisticated, senior developers to save them valuable time and effort," says Pulijala. "Low-code/no-code helps, whether it's addressing talent shortages or freeing up other developers' time. With low-code/no-code solutions, a junior product manager can build a basic prototype, freeing up more senior engineers to focus on customized, higher code solutions. In addition to mitigating talent shortages, low-code/no-code tools improve business agility and contribute to cost savings since it significantly reduces hiring costs and application maintenance costs." ... "While no-code solutions are built from the point-of-view of a non-developer user, they will at times still require professional IT intervention. Enterprise applications can be complex and outages can happen, requiring IT to step in to triage and get things running again."


Multiple Flaws Uncovered in Data Center Systems

Data center equipment and infrastructure solutions provider CyberPower's PowerPanel Enterprise DCIM platform allows information technology teams to manage, configure and monitor the infrastructure within a data center through the cloud, serving as a single source of information and control for all devices. "These platforms are commonly used by companies managing on-premises server deployments to larger, co-located data centers - like those from major cloud providers AWS, Google Cloud and Microsoft Azure," the researchers said. Dataprobe manufactures power management products that assist businesses in monitoring and controlling their equipment. The iBoot-PDU allows administrators to remotely manage the power supply to their devices and equipment via a "simple and easy-to-use" web application, according to the researchers, who added that the devices are "typically found in small to midsized data centers and used by SMBs managing on-premises server deployments."


Hybrid mesh firewall platforms gain interest as management challenges intensify

"A hybrid mesh firewall makes you highly dependent on one single vendor," says John Carey, managing director of the technology solutions group at global consulting firm AArete. "Some organizations prefer to have best-of-breed and select the right tool for the right job. You'll see CrowdStrike running alongside CyberArk running alongside Juniper running alongside Cisco. You don't see many organizations doing a blanket removal, taking out all those tools and putting in one. It's costly, and they don't want to be totally dependent on that one vendor." With a hybrid mesh firewall only able to manage firewalls from that one vendor, that could be a problem for those companies. Alternatively, an enterprise can use an NSPM product from a vendor such as Tufin or Firemon, says Scott Wheeler, cloud practice leader at Asperitas Consulting, an IT and cloud services firm. "They are not firewall products, but they do enable the concept of hybrid mesh firewall. So, depending on how you look at the semantics, they are more of a hybrid mesh firewall solution because you can manage across different firewall providers."


Why the cyber skills crisis is an opportunity to transform your cybersecurity

A strategic approach is needed for security leaders and their teams to address the resource crisis. A key response emerging in the market is security vendor consolidation. According to Gartner, 75% of organizations were pursuing consolidation in 2022, almost tripling since 2020. Considering that an alarming 35% of cyber budgets are being spent on tools that don’t give a measurable improvement in cybersecurity posture, it’s evident why businesses are seeking to consolidate and do more with less. However, there is a degree of caution around consolidating vendors and tools. Nearly four in five security leaders and decision-makers admitted to being concerned that consolidation will reduce their ability to mitigate cyber risk. But we found this skepticism to be unfounded. In reality, half of those who have begun consolidating have seen an improvement in security posture as a result. This is because, when approached strategically, consolidation streamlines security operations. 


Industrial modernization: Becoming future-ready in uncertain times

Future-ready companies have already embraced agile practices and distributed computing technologies like edge computing, containers, and microservices to optimize existing systems and drive innovation. IT modernization is the practice of updating older software and infrastructure to newer computing approaches, including languages, frameworks, architectures, and infrastructure platforms. It does not require wholesale replacement; if done well, modernization can extend the lifespan of an organization’s software and infrastructure while taking advantage of recent innovation. While the term legacy may have a negative connotation in technology, these systems are often the bedrock of a company’s business operations. Modern, cloud-native computing paradigms are distributed by nature. Modernization shifts the technology stack from a tightly coupled, hierarchical, siloed, and point-to-point structure to one that is application-driven, loosely coupled, software-defined, and integrated across all layers of the architecture.


Interrogate Your Software with AI — The Future for SREs

With AI-driven incident analysis, we gain the capability to process data rapidly and recognize correlations that otherwise might have been overlooked. This empowers us to take proactive measures and predict potential incidents using historical data, breaking free from the limitations of reactive maintenance. Moreover, AI-powered analysis can play a vital role in assisting SREs in determining the severity of incidents. By defining criteria for incident severity classification and relying on AI insights, we can make more informed decisions and prioritize response efforts efficiently. Resource allocation, a crucial aspect of SRE, can be guided by AI-generated statistics that paint a clear picture of an incident’s impact and resource requirements, enabling us to scale responses based on severity and complexity. Finally, we can’t forget about incident reports, documentation and runbooks. We all know how bad those can be. Depending on who triaged the incident, what’s reported and documented can range from a simple paragraph to pages of in-depth research and analysis. 



Quote for the day:

"The problem with being a leader is that you're never sure if you're being followed or chased." -- Claire A. Murray

Daily Tech Digest - August 16, 2023

The looming battle over where generative AI systems will run

What is becoming more apparent is that the location where most generative AI systems will reside (public cloud platforms versus on-premises and edge-based platforms) is still being determined. Vellante’s article points out that AI systems are running neck-and-neck between on-premises and public cloud platforms. Driving this is the assumption that the public cloud comes with some risk, including IP leakage or the possibility that competitors draw better conclusions from your data. Also, enterprises still have a lot of data in traditional data centers or on edge computing rather than in the cloud. This can cause problems when the data is not easily moved to the cloud, with data silos being common within most enterprises today. AI systems need data to be of value, and thus it may make sense to host the AI systems closest to the data. I would argue that data should not exist in silos and that you’re enabling an existing problem. However, many enterprises may not have other, more pragmatic choices, given the cost of fixing such issues. 


Quantum Computing: Australia’s Next Great Tech Challenge & Opportunity

One of the big opportunities for Australia in this space will be its close relationship with the United States. Because of the sheer value of quantum computing research and technology across both military and civilian IP, nations tend to be more circumspect about sharing information in comparison to conventional technology. The downside to this is that it means the U.S. isn’t able to draw on the same global pool of talent that it’s used to. A shortage of talent isn’t such a major issue in regular computing fields because global talent tends to pool and openly share information. ... “As other nations push forward, Australia risks missing out on the potential economic benefits,” a report by the University of Sydney notes. “We could also lose talented workers to countries that are investing more in quantum research. Projects like the ambitious attempt to build the world’s first complete quantum computer aim to provide local opportunities and funding alongside their top-line goals. Moreover, Australia has a responsibility to ensure quantum technologies are developed and used ethically, and their risks managed.”


Q&A: An Introduction to Streaming AI

Streaming AI is about continuously training ML models using real-time data, sometimes with human involvement. The incoming data streams from many sources are analyzed, combined with contextual information, and matched against features that carry condensed information and intelligence specific to the given problem. ML algorithms continually generate these features using the most current data available. On the other hand, as noted earlier, generative AI focuses on generating responses based on a “seed” and then a pattern for finding the next thing to tack on. This works to generate content that conforms to certain parameters the model has “learned.” It is bounded, but not in a way that the boundaries can be easily understood. Until the recent rise of LLMs, considerable effort was invested in making ML models explainable to humans. The question was: how does the model arrive at its result? The “I have no idea” response is hard for humans to accept. In the made-up legal case citations example, the LLM program generated a motion that argued a point, but when asked to explain or validate its path, it just made some stuff up.
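
The continuous-training loop described above can be sketched with scikit-learn's incremental `partial_fit` API. The library choice, the toy feature stream, and the batch-driven loop are all illustrative assumptions, not details from the interview:

```python
# Minimal sketch: keep a model current by updating it with each arriving batch.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])

def make_batch(n=200):
    # Toy "stream": two features; the label depends on their sum.
    X = rng.normal(size=(n, 2))
    y = (X.sum(axis=1) > 0).astype(int)
    return X, y

# Each new batch of real-time data refreshes the model in place,
# rather than retraining from scratch on the full history.
for _ in range(50):
    X, y = make_batch()
    model.partial_fit(X, y, classes=classes)

X_test, y_test = make_batch(1000)
print(f"accuracy after streaming updates: {model.score(X_test, y_test):.2f}")
```

In a production streaming pipeline the `make_batch` stand-in would be replaced by features computed continuously from live event streams, but the update loop itself stays this simple.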


CISO’s role in cyber insurance

Enter cyber insurance, a safety net that offers organisations a way to mitigate the financial impact of these cyber incidents. However, navigating the complex landscape of cyber insurance is no small feat. This is where the Chief Information Security Officer (CISO) comes into play. As the vanguard of an organisation’s cybersecurity efforts, the CISO not only ensures that digital fortresses are robust but also plays a pivotal role in the realm of cyber insurance. Their expertise and insights are instrumental in assessing risks, selecting the right coverage, and ensuring that the organisation gets the most out of its policy. In essence, the CISO bridges the gap between the technical world of cybersecurity and the financial realm of insurance, ensuring that businesses are both well protected and well insured. ... As the primary custodian of an organisation’s cybersecurity posture, the CISO is responsible for conducting a thorough risk assessment. This involves identifying potential vulnerabilities, assessing the potential impact of different types of cyber incidents, and estimating the financial costs associated with these incidents.


Bolstering Africa’s Cybersecurity

In recent weeks and months, we have seen opportunities arise, often provided by academia and government, to improve cyber education. However, some parts of Africa are still without decent levels of electricity. So, is the dream of cyber education for all unattainable? ... Despite this, Africa-based data security analysts point out that a dearth of qualified technicians coupled with a lack of investment in cybersecurity has been the direct contributor to a growth in the amount and scale of successful cyberattacks. In fact, according to research from IFC and Google, Africa’s e-economy is expected to reach $180 billion by 2025, but its lack of security support could halt that growth. Most of these campaigns are based upon spam or phishing efforts derived from information garnered from open source intelligence (OSINT), which is often more effective against a remote workforce that may be more exposed to attack techniques while outside of the technical and administrative controls of traditional office work.


Everything Can Change: The Co-Evolution of the CMO and the CISO

Organizations with an established partnership between the CISO and CMO tend to outperform their competitors. This collaboration allows for a cohesive approach to risk management and brand protection, resulting in increased customer trust and loyalty. Organizations that view the CISO purely as a technical operational leader often struggle with cybersecurity initiatives and fail to align security measures with business goals. This approach limits the potential for strategic contributions from the CISO in driving revenue growth and defending value. On the other hand, organizations that integrate the CISO into the go-to-market strategy leverage their expertise to address security concerns proactively, enhancing customer trust and differentiating themselves from competitors. By combining security practices with marketing efforts, these organizations can communicate their commitment to data protection and establish a competitive advantage in terms of trustworthiness. Effective CISOs have a seat at the executive table, allowing them to more directly align security initiatives with business outcomes. 


Machine unlearning: The critical art of teaching AI to forget

Machine unlearning is the process of erasing the influence specific datasets have had on an ML system. Most often, when a concern arises with a dataset, it’s a case of modifying or simply deleting the dataset. But in cases where the data has been used to train a model, things can get tricky. ML models are essentially black boxes. This means that it’s difficult to understand exactly how specific datasets impacted the model during training and even more difficult to undo the effects of a problematic dataset. OpenAI, the creators of ChatGPT, have repeatedly come under fire regarding the data used to train their models. A number of generative AI art tools are also facing legal battles regarding their training data. Privacy concerns have also been raised after membership inference attacks have shown that it’s possible to infer whether specific data was used to train a model. This means that the models can potentially reveal information about the individuals whose data was used to train them.
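
The membership inference signal mentioned above can be illustrated with a toy experiment: an overfit model tends to be more confident on points it was trained on than on unseen points, and that gap is what an attacker exploits. The dataset and the random-forest model are assumptions chosen for illustration:

```python
# Toy illustration of the membership-inference signal: compare model
# confidence on training members versus non-members.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + rng.normal(scale=1.0, size=400) > 0).astype(int)  # noisy labels
X_train, y_train = X[:200], y[:200]
X_out, y_out = X[200:], y[200:]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

def confidence(m, X):
    # Mean probability the model assigns to its predicted class.
    return m.predict_proba(X).max(axis=1).mean()

in_conf = confidence(model, X_train)   # members of the training set
out_conf = confidence(model, X_out)    # non-members
print(f"member confidence {in_conf:.2f} vs non-member confidence {out_conf:.2f}")
```

The confidence gap is exactly the kind of trace that machine unlearning aims to erase: after removing a record's influence, the model should no longer treat it differently from data it never saw.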


Unit Tests Are Overrated: Rethinking Testing Strategies

Unit tests fare much more poorly with this metric than most people realize. The first problem is that they often don’t provide useful information about the actual state of the system under review. When unit tests are written as acceptance tests, they are often intricately coupled with the specific implementation. They will only fail if the implementation changes, not when changes break the system (e.g., verifying the value of a class constant). Using acceptance tests as regression tests must be done intentionally and thoughtfully, deleting everything that does not provide useful information about the system’s behavior. Another major problem with unit tests is that to test the inputs of one method, you often need to mock out the responses from other methods. When you do this, you are no longer testing the system you have, you are testing a system that you assumed you had in the past. The system can break and a unit test will not fail because it had an assumption that an input would be received that the real-world system no longer supplies. 
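
The mocking pitfall described above is easy to demonstrate: the mock encodes a stale assumption about a collaborator, so the unit test keeps passing after the real collaborator changes. The price-service example is hypothetical, chosen only to make the failure mode concrete:

```python
# Sketch of the mocking pitfall: a unit test that tests the system you
# assumed you had, not the system you have.
from unittest.mock import Mock

def total_in_cents(price_service, item):
    # Written against an old contract: get_price returns integer cents.
    return price_service.get_price(item) + 99  # add a flat 99-cent fee

# The unit test mocks out the collaborator, freezing that assumption in place.
mock_service = Mock()
mock_service.get_price.return_value = 1000  # 10.00, as cents
assert total_in_cents(mock_service, "book") == 1099  # passes, forever

# Meanwhile, the real service was changed to return dollars as a float.
class RealPriceService:
    def get_price(self, item):
        return 10.00  # dollars, not cents

# The mocked test still passes, but the real system is now broken:
broken = total_in_cents(RealPriceService(), "book")
print(broken)  # 109.0 -- a wrong total, and no unit test ever failed
```

An integration-style test that exercised `RealPriceService` directly would have caught the contract change the moment it happened.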


The vital role the CISO has to play in the boardroom

Cybersecurity risk management and information governance are complex and gritty subjects which can be hard to follow for the uninitiated. Boardrooms aren’t the place for the ins and outs of the issue at hand. Learning to communicate effectively is possibly the single most important skill for aspiring and ambitious CISOs. Throughout history, great leaders have demonstrated an excellent ability to communicate, bringing people on a journey with them and gathering support along the way. This is not about dumbing down or glossing over the important parts. Rather, it’s about honing a fundamental business skill: being able to make a compelling argument clearly and concisely. You need to be able to translate critical cybersecurity information into business objectives. Cybersecurity risk management is a regulated requirement. Board directors, officers and senior management can be held liable for the decisions they make around cybersecurity risks and incidents. Clear and effective communication is critical in supporting organisations to make the right decisions that could later be relied upon to protect their people.


3 strategies that can help stop ransomware before it becomes a crisis

Without an incident response plan in place, companies typically panic, not knowing who to call, or what to do, which can make paying the ransom seem like the easiest way out. With a plan in place, however, people know what to do and will ideally have practised the plan ahead of time to ensure disaster recovery measures work the way they're supposed to. ... Having multiple layers of defense, as well as setting up multifactor authentication and data encryption, are fundamental to cybersecurity, but many companies still get them wrong. Stone recently worked with an educational organization that had invested heavily in cybersecurity. When they were hit by ransomware, they were able to shift operations to an offline backup. Then the attackers escalated their demands -- if the organization didn’t pay the ransom, their data would be leaked online. “The organization was well prepared for an encryption event, but not prepared for the second ransom,” Stone says. “There was actual sensitive data that would trigger a number of regulatory compliance actions.”



Quote for the day:

"Added pressure and responsibility should not change one's leadership style, it should merely expose that which already exists." -- Mark W. Boyer

Daily Tech Digest - August 15, 2023

How to build employee trust as AI gains ground

Most experts agree, however, that newer AI tools are less about replacing people and more about eliminating mundane, manual, or number-crunching tasks that most employees already hate. In fact, the technology will mostly help free up workers to tackle more important tasks such as project management, data science research and, perhaps most importantly, creative thinking and problem solving. "There is no example today of an AI system that can perform data science totally independent of people," said Erick Brethenoux, a distinguished vice president analyst at research firm Gartner. A lot of the uncertainty and fear workers feel about generative AI tools is based on ignorance, experts say. AI, in its many forms, has been around for more than 50 years, but many people simply don’t recognize it’s been beside them all this time. “People have always been afraid of AI because the vision they have of it is science fiction; it’s a Hollywood vision of it,” Brethenoux said. “There’s a lot of hype around it."


Red Hat rivals form Open Enterprise Linux Association

At the heart of the new organization is a disagreement over the way Red Hat, long the dominant force in enterprise Linux, provides access to its source code. For years, the company supported the development of a Red Hat Enterprise Linux clone called CentOS, with the idea of providing a free alternative for testing and development purposes, given that paid support would be unnecessary for that purpose. However, increasingly, users began to implement CentOS instead of RHEL in production environments as well, with other companies, including CIQ, springing up to provide enterprise support. Accordingly, Red Hat stopped supporting CentOS in its previous form two years ago, in favor of an alternative called CentOS Stream. That, however, is an upstream distribution, meaning that it’s updated much more frequently, making it less suitable for production work. And earlier this summer, Red Hat made its source code less accessible, restricting access to paying Red Hat customers and obscuring some details of the way the code is put together to create the final distribution.


How FraudGPT presages the future of weaponized AI

FraudGPT signals the start of a new, more dangerous and democratized era of weaponized generative AI tools and apps. The current iteration doesn’t reflect the advanced tradecraft that nation-state attack teams and large-scale operations like the North Korean Army’s elite Reconnaissance General Bureau’s cyberwarfare arm, Department 121, are creating and using. But what FraudGPT and the like lack in generative AI depth, they more than make up for in ability to train the next generation of attackers. With its subscription model, in months FraudGPT could have more users than the most advanced nation-state cyberattack armies, including the likes of Department 121, which alone has approximately 6,800 cyberwarriors, according to the New York Times — 1,700 hackers in seven different units and 5,100 technical support personnel. While FraudGPT may not pose as imminent a threat as the larger, more sophisticated nation-state groups, its accessibility to novice attackers will translate into an exponential increase in intrusion and breach attempts, starting with the softest targets, such as in education, healthcare and manufacturing.


Application Rationalization: Is Complexity Avoidable?

Removing the clutter from your application portfolio is its own reward. Simplifying your software means: easier maintenance; greater agility; lower training requirements; reduced costs; faster rationalization in future. This is, indeed, all possible to achieve. With unlimited budget, and a willingness to both make tough choices about stripping back applications and be strict with your colleagues, you could of course remove all complexity from your portfolio. The question remains, however: should you? Fully optimizing your application portfolio is costly, time-consuming, and will likely cause a lot of frustration for software users along the way. True application rationalization involves a balancing act between technical debt and optimization, meaning some complexity will likely need to be tolerated. If your team communicates via Slack, for example, it would be easier to remove email and Zoom licenses. However, if your external stakeholders don't use Slack Connect, you could cripple your company's ability to function by doing so.


How to take action against AI bias

With AI adoption increasing rapidly, it’s critical that guardrails and new processes be put in place. Such guidelines establish a process for developers, data scientists, and anyone else involved in the AI production process to avoid potential harm to businesses and their customers. One practice enterprises can introduce before releasing any AI-enabled service is the red team versus blue team exercise used in the security field. For AI, enterprises can pair a red team and a blue team to expose bias and correct it before bringing a product to market. It’s important to then make this process an ongoing effort to continue to work against the inclusion of bias in data and algorithms. Organizations should be committed to testing the data before deploying any model, and to testing the model after it is deployed. Data scientists must acknowledge that the scope of AI biases is vast and there can be unintended consequences, despite their best intentions. Therefore, they must become greater experts in their domain and understand their own limitations to help them become more responsible in their data and algorithm curation.
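
A red team probing a model for bias needs a concrete metric to flag. One simple sketch is a demographic parity check: compare the model's positive-prediction rate across a sensitive group before release. The metric choice, the toy predictions, and the flagging threshold are illustrative assumptions:

```python
# A minimal "red team" bias probe: flag a model whose positive-prediction
# rate differs sharply between two groups.
import numpy as np

def demographic_parity_gap(preds, group):
    """Absolute difference in positive-prediction rate between groups 0 and 1."""
    preds, group = np.asarray(preds), np.asarray(group)
    rate_0 = preds[group == 0].mean()
    rate_1 = preds[group == 1].mean()
    return abs(rate_0 - rate_1)

# Toy model outputs: group 1 receives far fewer positive outcomes.
preds = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, group)
print(f"demographic parity gap: {gap:.2f}")  # 0.80 here

THRESHOLD = 0.2  # assumed policy line for escalation
if gap > THRESHOLD:
    print("red team flags this model for blue-team review before release")
```

Run as an ongoing gate rather than a one-off audit, a check like this is one way to make the red-team/blue-team exercise repeatable after deployment, not just before launch.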


3 Ways Enterprise Architects Can Bridge the Socio-Technical Gap

Software architecture is often a series of trade-offs. However, for people not involved in the original decision, it is often no longer clear what the trade-off was or how that trade-off led to the decision. One approach to capturing these decisions is Architecture Decision Records (ADRs). Note that ADRs are not some kind of technical rule, they are essentially a document. But having such a document can be a useful communication device, as it creates a history that allows people to keep track of trade-offs made in the past. The code and architecture themselves can only communicate the current state, but not how that current state came to be. Note that recording decisions doesn’t make them permanent or immutable. ... Capturing the rationale behind architectural decisions through methods like Architecture Decision Records ensures a clear understanding of trade-offs made over time. Additionally, addressing architecture incrementally, akin to code-level refinements, offers a practical way to manage risk and avoid conflicting priorities.
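
As a concrete illustration, a minimal ADR might look like the following. The fields follow the widely used Nygard-style template (status, context, decision, consequences); the decision itself is hypothetical:

```markdown
# ADR 007: Use PostgreSQL for the orders service

## Status
Accepted

## Context
The orders service needs transactional guarantees across order and payment
records. The team already operates both MySQL and PostgreSQL clusters.

## Decision
Use PostgreSQL, primarily for its transactional DDL and row-level locking,
which our migration tooling depends on.

## Consequences
- Engineers joining the service must learn PostgreSQL specifics.
- The trade-off (operational familiarity vs. feature fit) is recorded here so
  future maintainers understand why MySQL was not chosen.
```

Kept next to the code it describes, a record like this preserves exactly the "how the current state came to be" history that the code itself cannot communicate.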


Broken Promises of the Low-Code Approach

The reality is that many low-code solutions present a fundamental misunderstanding of software development: They conflate the challenge of understanding a programming language’s syntax with the challenge of designing effective application logic. Programming languages are just tools; their syntax is merely a means of expressing solutions. The true heart of software development lies in problem-solving, in crafting algorithms, data structures and interfaces that efficiently fulfill the application’s needs. By aiming to simplify software development through a graphical user interface (GUI), low-code solutions replace syntax without necessarily simplifying the fundamental challenge of designing robust applications. This approach can introduce multiple drawbacks while failing to alleviate the true complexities of software creation, ultimately having a negative impact on your team’s ability to deliver real value. ... Low-code solutions frequently grapple with limited customization, often failing to meet specific, complex or unique business requirements. The risk of vendor lock-in is another significant downside, potentially leaving users high and dry if there are changes in pricing, feature offerings or if the vendor closes shop.


Micro transformation: Driving big business benefit through quick IT wins

While it’s still early days to determine the success of the micro transformation, the initial customer feedback has been encouraging, Aird says. “There’s something intrinsically rewarding when you hear directly from customers about how much they’re enjoying the new tool, how it’s adding value to their purchasing experience, and how it makes the process of creating their own neon signs easier and more fun and exciting.” This is critical because Custom Neon operates in a “highly saturated e-commerce niche,” he adds, and micro transformations such as upgrading the website tool “subtly, but surely redefine the customer experience, contributing to our continued growth and competitiveness.” This kind of micro transformation underscores the power of agile methodology, enabling IT to identify bottlenecks, implement targeted improvements, and quickly see the effects, Aird says. “Moreover, they allow us to enhance our KPIs, notably in customer satisfaction and operational efficiency.”


Cybersecurity hiring gap: Time to rethink who can contribute

Ford sees the "cybersecurity talent shortage" as misidentified; he refers to the situation as an "experience shortage." As we all know, the only way to garner experience is by doing. He opened doors to "overlooked" talent with the creation of their Cybersecurity Career Reboot Program. The program's key factor probably broke every HR sorting tool, as they sought out individuals who had been passed over because they "lack the experience required to land entry-level jobs." ... They then used their Professional Rotation Experience Program (PREP), which took recent grads and put them in a "two-year rotational program that includes global exposure to all our cybersecurity functions. PREP participants gain experience with the foundations of cybersecurity through hands-on project work, exposure to a variety of experiences, and innovative training and development, rotating through the different teams within cybersecurity every six months during the program." While the focus of homegrown talent programs is on the new and eager employees, CISOs must also keep an eye on retaining and improving the talent already in place.


Generative AI – What Are the Legal Issues?

The pace of the development of AI far outstrips the legal, regulatory and ethical frameworks which need to be put in place to ensure that the benefits of AI are carefully considered. For anyone looking at adopting or developing AI technologies, risk assessments should be conducted to identify and mitigate the impact on individuals. ... Considering the dataset used to teach the algorithm will potentially identify areas of risk. For example, an AI designed to sift CVs and provide hiring recommendations might inherit any unconscious hiring biases from the underlying dataset of ‘successful applicant’ and ‘unsuccessful applicant’ CVs. Not all algorithms are born equal and consideration should be given to the sophistication and development of any product before use given the potential impact on individuals. ... As Gen AI can create new content, who will own the intellectual property in any new work, media, image or music? There may be IP issues if the Gen AI creator did not have sufficient rights to the information used in the training dataset and any contract should clearly set out IP ownership where possible.



Quote for the day:

"It is the responsibility of leadership to provide opportunity, and the responsibility of individuals to contribute." -- William Pollard

Daily Tech Digest - August 11, 2023

How to tell if your cloud finops program is working

A successful finops program should ensure compliance with applicable financial regulations and industry standards. These change across industries, but a few industries, such as finance and health, are more constrained by rules than others. A good finops program will help your company stay current with relevant laws, rules, and regulations, such as GAAP (generally accepted accounting principles) or IFRS (International Financial Reporting Standards). Regular audits and reviews should be conducted to ensure that financial processes and practices align with the required standards and laws. These are often overlooked by cloud engineers and cloud architects building and deploying cloud-based systems since most of them don’t have a clue about regulations and laws beyond the basics. If done well, finops should take the stress off those groups and automate much of what needs to be monitored regarding regulatory compliance. I was early money on finops, and for good reason. We need to understand the value of cloud computing right after deployment and monitor its value continuously. 
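
The kind of automated compliance monitoring described above can be as simple as a scheduled policy check over the cloud resource inventory. The required-tag policy and the resource records below are hypothetical, meant only to show the shape of such a check:

```python
# Hypothetical finops compliance sweep: flag cloud resources missing the
# tags that audits and chargeback reporting require.
REQUIRED_TAGS = {"owner", "cost-center", "data-classification"}  # assumed policy

def non_compliant(resources):
    """Return names of resources missing any required tag."""
    return [r["name"] for r in resources
            if not REQUIRED_TAGS <= set(r.get("tags", {}))]

inventory = [
    {"name": "prod-db", "tags": {"owner": "dba", "cost-center": "42",
                                 "data-classification": "restricted"}},
    {"name": "scratch-vm", "tags": {"owner": "dev"}},
]
print(non_compliant(inventory))  # ['scratch-vm']
```

Wired into a daily job that pulls the real inventory from the cloud provider's API, a check like this takes the regulatory-monitoring burden off the engineers and architects who, as noted above, rarely track these rules themselves.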


Why Data Science Teams Should Be Using Pair Programming

Based on what we learn about the data from EDA, we next try to summarize a pattern we’ve observed, which is useful in delivering value for the story at hand. In other words, we build or “train” a model that concisely and sufficiently represents a useful and valuable pattern observed in the data. Arguably, this part of the development cycle demands the most “science” from data scientists as we continuously design, analyze and redesign a series of scientific experiments. We iterate on a cycle of training and validating model prototypes and make a selection as to which one to publish or deploy for consumption. Pairing is essential to facilitating lean and productive experimentation in model training and validation. With so many options of model forms and algorithms available, balancing simplicity and sufficiency is necessary to shorten development cycles, increase feedback loops and mitigate overall risk in the product team. As a data scientist, I sometimes need to resist the urge to use a sophisticated, stuffy algorithm when a simpler model fits the bill.
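
The simplicity-versus-sufficiency trade-off described above can be made mechanical: validate a simple candidate against a more complex one on held-out data and keep the simple model unless the complex one wins by a clear margin. The dataset, the candidate models, and the margin are illustrative assumptions:

```python
# Sketch of a pairing-friendly experiment: prefer the simpler model unless
# the complex one is meaningfully better on cross-validation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # a pattern a linear model captures

candidates = {
    "simple (logistic)": LogisticRegression(),
    "complex (boosting)": GradientBoostingClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in candidates.items()}

# Only pay the complexity cost if it buys a clear accuracy margin.
MARGIN = 0.02  # assumed threshold
better_by = scores["complex (boosting)"] - scores["simple (logistic)"]
chosen = "complex (boosting)" if better_by > MARGIN else "simple (logistic)"
print(scores, "->", chosen)
```

Reviewing the experiment design in a pair - which candidates, which validation scheme, what margin justifies the complex model - is exactly where the second set of eyes shortens the training-and-validation cycle.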


Should IT Reinvent Technical Support for IoT?

A first step is to advocate for IoT technology purchasing standards and to gain the support of upper management. The goal should be for the company to not purchase any IoT technology that fails to meet the company’s security, reliability, and interoperability standards, which IT must define. None of this can happen, of course, unless upper management supports it, so educating upper management on the risks of non-compliant IoT, a job likely to fall to the CIO, is the first thing that should be done. Next, IT should create a “no exceptions” policy for IoT deployment that is rigorously followed by IT personnel. This policy will make it a corporate security requirement to set all IoT equipment to enterprise security standards before any IoT gets deployed. Finally, IT needs a way to stretch its support and service capabilities at the edge without hiring more support personnel, since budgets are tight. If something goes wrong at your manufacturing plant in Detroit while technical issues arise at your San Diego, Atlanta, and Singapore facilities, it will be a challenge to resolve all issues simultaneously with equal force.


Why AI Forces Data Management to Up Its Game

With so much storage growth, organizations never reach the point where storage is no longer a constant challenge. The combination of massive capacity growth and democratized AI make it imperative to implement effective data management from the edge to the cloud. A strong foundation for artificial intelligence necessitates well-organized data stores and workflows. Many current AI projects are faltering due to a lack of data availability and poor Data Management. Skilled Data Management, then, has become a key factor in truly realizing the potential of AI. But it also plays a vital role in containing storage costs, hardening data security and cyber resiliency, verifying legal compliance and enhancing customer experiences, decision-making, and even brand reputation. ... Using metadata and global namespaces, the Data Management layer makes data accessible, searchable, and retrievable on whatever storage platform or media it may reside. It adds automation to facilitate tiering of data to long-term storage as well as cleansing data and alerting on anomalous conditions.
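
The metadata-driven tiering automation mentioned above reduces to a policy over each object's metadata. A minimal sketch, with an assumed 90-day last-access policy and a made-up catalog:

```python
# Illustrative metadata-driven tiering: objects untouched for longer than a
# policy window are marked for migration to cheaper long-term storage.
from datetime import datetime, timedelta

TIER_AFTER = timedelta(days=90)  # assumed policy window

def tier_decisions(catalog, now):
    """Map each object to 'archive' or 'hot' based on last-access metadata."""
    return {name: ("archive" if now - meta["last_access"] > TIER_AFTER else "hot")
            for name, meta in catalog.items()}

now = datetime(2023, 8, 11)
catalog = {
    "sensor-logs-2022.parquet": {"last_access": datetime(2022, 12, 1)},
    "daily-report.csv": {"last_access": datetime(2023, 8, 10)},
}
print(tier_decisions(catalog, now))
```

In a real data management layer the catalog would come from the metadata store behind the global namespace, so the same policy applies regardless of which storage platform or media each object currently lives on.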


Hybrid work is entering the 'trough of disillusionment'

Even though remote and hybrid work practices are in the trough now, that doesn’t mean they’ll stay there. Some early adopters eventually overcome the initial hurdles and begin to see the benefits of innovation and best practices emerge. Until then, the return-to-office edicts continue to roll out. ... Even with an uptick in return-to-office mandates, office building occupancy continues to remain below pre-pandemic levels. The average weekly occupancy rate for 10 metropolitan areas in the United States this week was below 50% (48.6%), according to data tracked by workplace data company Kastle Systems. That occupancy rate is actually down 0.6% from last week. Office occupancy rates change substantially, depending on the day of the week. Tuesdays, Wednesdays and Thursdays are the most popular in-office days. Globally and in the US, organizations have moved from ad hoc hybrid work policies, where employees could pick their days in the office, to structured schedules.


Cisco: Hybrid work needs to get better

While organisations in APAC have been progressive in adopting hybrid work arrangements, Patel cautioned them against making the mistake of mandating that employees work in the office all the time. “It’s much better to create a magnet than a mandate,” he said. “Give people a reason to come back to the office because when they collaborate in the office, there’s going to be this X factor that they don’t get when they are 100% remote.” Patel said adopting hybrid work would also help organisations recruit the best talent from anywhere in the world, enabling more people to participate equally in a global economy. “The opportunity is very unevenly distributed right now, but human potential is pretty evenly distributed, so it would be nice if anyone in a village in Bangladesh can have the same economic opportunity as someone in Silicon Valley. Most of the time, the mindset is that you are distance-bound, so if you don’t happen to be in the same geography, then you don’t have access to opportunity. That’s a very archaic way of thinking and we need to think about this in a much more progressive manner,” he said.


Rethinking data analytics as a digital-first driver at Dow

The first step in this journey involved bringing our D&A teams under one roof in the first half of 2022. This team eventually became Enterprise D&A, with team members based around the world. To develop the strategy, we held discussions with external partners and interviewed Dow leaders to identify trends important to business success. Then we looked at where those trends align with key focus areas like customer engagement, accelerating innovation, market growth, reliability, sustainability, and the employee experience. Our central task was to translate our findings into a strategy that creates the most value for our stakeholders: our customers, our employees, our shareholders, and our communities. We determined we needed to move to a hub-and-spoke model. To make this work and achieve our vision of transforming data into a competitive advantage, we would need to build a strong culture of collaboration around D&A and support it with talent development within our organization and across the company.


Why data isn’t the answer to everything

What happens when you disagree with the AI? What are you then going to go and do? If you’re always going to disagree with it and do what you wanted to do anyway, then why bother bringing the AI in? Have you maybe mis-written your requirements and what that AI system is going to go and do for you? A lot of this is the foundational strategy on organisational design, people design, decision making. As an executive leader, it’s really easy to stand up on stage and say, ‘Here’s our 2050 vision or our 2030 vision.’ At the end of the day, an executive doesn’t do much; they just create the environment for things to happen. It’s frontline staff that make decisions. There are two reasons why you wouldn’t make a decision: you don’t have the right data and context, or you don’t have the authority to make that decision. Typically, you only escalate a decision when you don’t have the data and context. It’s your manager that has more data and context, which enables that authority. So, with more data and context, I can push more authority and autonomy down to the frontline to actually go and drive transformation.


Whirlpool malware rips open old Barracuda wounds

The vulnerability, according to a CISA alert, was used to plant the Seaspy and Whirlpool backdoor payloads on compromised devices. Seaspy is a known, persistent, and passive Barracuda offender that masquerades as the legitimate Barracuda service "BarracudaMailService" and allows threat actors to execute arbitrary commands on the ESG appliance. Whirlpool is a new offensive tool with which attackers establish a Transport Layer Security (TLS) reverse shell to the command-and-control (C2) server. "CISA obtained four malware samples -- including Seaspy and Whirlpool backdoors," the CISA alert said. "The device was compromised by threat actors exploiting the Barracuda ESG vulnerability." ... Whirlpool was identified as a 32-bit Executable and Linkable Format (ELF) file that takes two arguments (C2 IP and port number) from a module to establish a TLS reverse shell. A TLS reverse shell is a method used in cyberattacks to establish a secure communication channel between a compromised system and an attacker-controlled server.


How digital content security stays resilient amid evolving threats

AI technology advancements and the great opportunities they provide have also motivated business leaders and consumers to reassess the underlying trust models that have made the internet work for the past 40 years: every major advance in computing technology has stimulated sympathetic updates in the computer security industry, and this recent decisive move into a world powered by data, including auto-generated data, is no different. Provenance will become a key component in determining the trustworthiness of data. The changes, though, extend beyond technology. Rather than continuing to use systems that were built to assume trust and then verify, businesses and consumers will move to verify-then-trust systems, which will also bring mutual accountability into all processes where data is shared. Standards, open APIs, and open-source software have proven adaptable to changing technology before, and will continue to prove adaptable in the age of AI and significantly higher volumes of digital content.
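The verify-then-trust idea above can be illustrated with a minimal sketch: a producer attaches a provenance tag to content, and a consumer checks that tag before trusting the data. This is a simplified illustration using a shared secret and HMAC from the Python standard library; the article does not prescribe a mechanism, and real provenance systems (such as the C2PA standard) use public-key signatures and richer metadata. All names here are hypothetical.

```python
# Minimal "verify then trust" sketch: content is only accepted if its
# provenance tag checks out. Illustration only -- real systems use
# public-key signatures (e.g. C2PA), not a shared secret.
import hashlib
import hmac

SECRET_KEY = b"shared-provenance-key"  # placeholder secret for illustration

def sign_content(content: bytes) -> str:
    """Producer side: attach a provenance tag when content is created."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify_then_trust(content: bytes, tag: str) -> bool:
    """Consumer side: verify the tag BEFORE trusting or processing the data."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information via timing differences
    return hmac.compare_digest(expected, tag)

doc = b"quarterly report v3"
tag = sign_content(doc)
print(verify_then_trust(doc, tag))           # genuine content is accepted
print(verify_then_trust(b"tampered", tag))   # altered content is rejected
```

The key inversion from the old model is that verification happens first: content without a valid tag is never processed, rather than being trusted by default and audited later.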



Quote for the day:

"He who wishes to be obeyed must know how to command" -- Niccol_ Machiavelli