Customer Question

* What are important considerations for an organization to dispose of old computer equipment? What methods would work best for the organization in which you are working or have worked?

* Why is it recommended to establish formal evaluation criteria when considering the purchase of hardware for the organization? Are any of the criteria from the wireless laptop applicable to any other types of hardware to be purchased for your organization?
Submitted: 4 years ago.
Expert:  Chris Parker replied 4 years ago.
Hi!

Is this the question you were referring to in the other thread?

Regards,
Chris
Customer: replied 4 years ago.
yes indeed.
Expert:  Chris Parker replied 4 years ago.
I can answer this by tomorrow afternoon. Does that work for you?

-Chris
Customer: replied 4 years ago.
yes that would be great thanks
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my thoughts on this topic from the following link: Click.

You can use the inputs from my response to answer your questions.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
I do have new work for you. Could I post it, please?
Expert:  Chris Parker replied 4 years ago.
Please post it and I will let you know if I can work on it.

Thanks,
Chris
Customer: replied 4 years ago.
  • What are the major differences between Java programming language and any other language? List and discuss three items.
  • How could your company utilize PDA technology to improve efficiencies? Would there be a time, cost, or labor savings?
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to your questions from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Thanks for your help. I do have some more work for you.
Customer: replied 4 years ago.
Based on the article "Fast Windows Fixes", how would you (or do you) use your Windows knowledge to be efficient at troubleshooting the basics? Do you think that knowing the basics is the standard today?
Expert:  Chris Parker replied 4 years ago.
Thanks for the new question.

Download my response to your new question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

Could you help with this, please?

 

What is an example of a "data mining" concept?

What is the key benefit of a "Data Warehouse"?
What is the caveat of duplication in data?
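As a concrete illustration of one "data mining" concept, and of the duplication caveat, here is a minimal market-basket sketch with invented data; it is an illustration only, not the expert's linked answer.

```python
# Toy "market basket" mining: find item pairs frequently bought together.
# All data here is invented, for illustration only.
from collections import Counter
from itertools import combinations

baskets = [
    {"milk", "bread", "eggs"},
    {"milk", "bread"},
    {"bread", "eggs"},
    {"milk", "bread", "eggs"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# Support = fraction of baskets containing the pair.
for pair, count in pair_counts.most_common(3):
    print(pair, count / len(baskets))

# Duplication caveat: if the same transaction is loaded into the data
# warehouse twice, every support figure is inflated, which is why rows are
# deduplicated on a transaction key before mining.
```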

Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Thanks for your help as always.
Expert:  Chris Parker replied 4 years ago.

You are very welcome. Thanks for requesting me.

 

-Chris

Customer: replied 4 years ago.

* How important is wireless networking to your company? Would your company improve the sales/service process utilizing wireless networking?

* Would you utilize a WEP enabled phone to perform stock trades? Why or why not?

* With each of the above technologies, how is security important? Would you trust the methods that exist today to secure these technologies?

Expert:  Chris Parker replied 4 years ago.

There seem to be three separate questions. How long should the response to each question be? Let me know.

 

Regards,

Chris

Customer: replied 4 years ago.

As long as you can make them would be fine.

 

Thanks

Expert:  Chris Parker replied 4 years ago.
I sent a note to you through the moderators. Please let me know once you receive it.

Thanks,
Chris
Customer: replied 4 years ago.
Oh, that's fine with me. You could have told me that on your own. You have always done a great job for me. $30 is okay by me.
Expert:  Chris Parker replied 4 years ago.
Thanks. I didn't tell you on my own because JustAnswer experts are not allowed to discuss prices directly with customers.

Please let me know when you need these questions answered by.

Regards,
Chris
Customer: replied 4 years ago.
Wednesday would be great
Expert:  Chris Parker replied 4 years ago.
Ok. I will complete it by then.

-Chris
Customer: replied 4 years ago.
Thanks
Expert:  Chris Parker replied 4 years ago.

I just wanted to update you that I will be posting by this evening.

 

Regards,

Chris

Customer: replied 4 years ago.
ok thanks
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to your questions from the following link: Click.

As agreed, please increase the value to $30 when you accept.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.

I was wondering if you needed more help with these questions.

 

Regards,

Chris

Customer: replied 4 years ago.

Thanks for the wonderful work you did for me.

Customer: replied 4 years ago.
I clicked accept and it only took $15. The work is for $30. How do I pay the rest of the $15 to you?
Customer: replied 4 years ago.

Hello,

 

I have not gotten a reply from you yet.

Expert:  Chris Parker replied 4 years ago.

You can leave the rest as a bonus by clicking the bonus button on this page.

 

Regards,

Chris

Customer: replied 4 years ago.
I just did that now. did you get it?
Customer: replied 4 years ago.

Computer business systems have been around for 40 years. Desktop computers have been around for 20 years. Why do business systems, in general, still need more development?

 

Responses must be at least 200 words.

Expert:  Chris Parker replied 4 years ago.

I got it. Thanks. I can answer this question by Wednesday evening. Does that work for you?

 

Regards,

Chris

Customer: replied 4 years ago.
Yes, Wednesday evening would be OK, thanks.
Expert:  Chris Parker replied 4 years ago.
Some unexpected personal work has come up suddenly, so I'm afraid I wouldn't be able to post the answer today. Is it ok if I post the answer tomorrow?

Regards,
Chris
Customer: replied 4 years ago.
OK, but I hope not too late tomorrow. Thanks
Customer: replied 4 years ago.
Is the work ready, please?
Expert:  Chris Parker replied 4 years ago.
I will be posting in an hour. Sorry for the delay.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
Download my response from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

According to the Tsai article (2010), how are cell phones and geographic information systems (GIS) impacting business systems?

 

Must be at least 200-300 words.

Expert:  Chris Parker replied 4 years ago.
Could you please post the Tsai (2010) article?

Regards,
Chris
Customer: replied 4 years ago.
Expert:  Chris Parker replied 4 years ago.
Thanks. I can go through it and provide the answer by tomorrow morning. Let me know if that works for you.

Regards,
Chris
Customer: replied 4 years ago.

Tomorrow morning sounds great thanks.

Expert:  Chris Parker replied 4 years ago.
In fact, I just finished answering your question.

Download from the following link: Click.

Please review and accept.

Regards,
Chris

Customer: replied 4 years ago.
I have some more work coming your way. Thanks a lot.
Expert:  Chris Parker replied 4 years ago.

I'll look forward to it.

 

Thanks,

Chris

Customer: replied 4 years ago.

How has globalization influenced business?

 

must be at least 200-300 words.

Expert:  Chris Parker replied 4 years ago.
When do you need this by?

Regards,
Chris
Customer: replied 4 years ago.
Thursday would be great please.
Expert:  Chris Parker replied 4 years ago.

Ok. I will post by this evening.

 

Regards,

Chris

Expert:  Chris Parker replied 4 years ago.

Download my response from the following link: Click.

Please review and accept.

Thanks,
Chris
Customer: replied 4 years ago.

Based on the article by Berrone et al. (2007), how would you define corporate ethical identity (CEI)? Describe the difference between corporate revealed ethics and corporate applied ethics.

 

Responses must be at least 200-300 words.

 

article by Berrone et al. (2007),

Expert:  Chris Parker replied 4 years ago.
Hi!

Thanks for the new question.

Download my response from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Thanks a lot. I will be sending you two questions, and I will pay for each of them.
Customer: replied 4 years ago.
  • In the United States, publicly traded companies must report revenue growth, income growth, and earnings per share (EPS) every quarter. What is the effect of this reporting on business operations?
Customer: replied 4 years ago.
  • Describe three financial statements used by businesses and what each is designed to report.

Both responses must be at least 200-300 words.

Expert:  Chris Parker replied 4 years ago.
Thanks for the new questions. When do you need them answered by?

Regards,
Chris
Customer: replied 4 years ago.
Thursday for the first one and Friday for the second one, please.
Expert:  Chris Parker replied 4 years ago.
Ok; I will take care of them by then.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to your first question from the following link: Click.

Please review and accept.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the second question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

A summary of the two questions that I sent you, please. Your summary should be at least 200 words.

 

Need it by Monday, please.

Expert:  Chris Parker replied 4 years ago.
Hi!

Download the summary from the following link: Click.

Please review and accept.

Regards,
Chris

Expert:  Chris Parker replied 4 years ago.
I was wondering if you needed more help with the previous answer.

Regards,
Chris
Customer: replied 4 years ago.

Electronic media is everywhere. What are trends in marketing that result from the convergence of entertainment, communications, and technology?

 

Responses must be at least 200-300 words.

Expert:  Chris Parker replied 4 years ago.
When do you need this by?

Regards,
Chris
Customer: replied 4 years ago.
Tomorrow night would be great, please, and I do have another question that I need by Friday. I will post that soon, thanks.
Expert:  Chris Parker replied 4 years ago.
I will complete it by tonight. Post your new question as well.

Regards,
Chris
Customer: replied 4 years ago.

Based on Poynter's (2008) article, how is Facebook used to network better with customers? Provide an example.

 

 

Must be at least 200-300 words

Customer: replied 4 years ago.
Expert:  Chris Parker replied 4 years ago.
I just wanted to update you that I will be posting my response to the first question in half an hour.

Regards,
Chris
Customer: replied 4 years ago.
ok
Expert:  Chris Parker replied 4 years ago.
Download my response to your first question from the following link: Click.

Please review and accept.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
An unexpected family emergency came up. Is it ok if I answer this question tomorrow?

Regards,
Chris
Customer: replied 4 years ago.
I'm sorry to hear that. Yes, tomorrow would be fine. Thank you for letting me know.
Expert:  Chris Parker replied 4 years ago.
The link you posted to Poynter's article isn't working. Could you please fix it?

I searched the internet and found this article: http://www.poynter.org/column.asp?id=122&aid=134855

Is it the right one?

Regards,
Chris

Customer: replied 4 years ago.

Article

 

Try this, please.

Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

A summary of the two questions that I sent you, please. Your summary should be at least 200 words.

 

Need it by tonight, please.

Expert:  Chris Parker replied 4 years ago.
Hi!

Download the summary from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

I have 3 questions that I will pay you for separately.

 

1) What is the return on investment (ROI) of developing business systems? (200 words) need by Thursday night. (See the worked example after this list.)

 

2) Based on Welch and Kordysh's (2007) article, what are the seven best practices for ERP implementation? Describe two of the best practices in greater detail. (200 words) need by Saturday.

 

3) Summary of both questions. (200 words) need by Monday
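As an illustration of the first question, here is a textbook ROI calculation with invented figures (not the expert's downloadable answer): a hypothetical system costing $500,000 that saves $200,000 per year over three years.

```python
# Textbook ROI = (total benefits - total cost) / total cost.
# All figures are invented for illustration.
total_cost = 500_000       # hypothetical development cost
annual_benefit = 200_000   # hypothetical yearly savings
years = 3
roi = (annual_benefit * years - total_cost) / total_cost
print(f"ROI = {roi:.0%}")  # ROI = 20%
```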

Customer: replied 4 years ago.
Expert:  Chris Parker replied 4 years ago.
The link you posted is not working. Could you please fix it?

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.

Hi!

 

Download my response to the first question from the following link: Click.

 

Please review and accept.

 

Also, please provide a working link to Welch and Kordysh's 2007 article so I can answer the other two questions.

 

Regards,

Chris

Customer: replied 4 years ago.
Customer: replied 4 years ago.

 

Many companies have invested heavily in enterprise resource planning (ERP) software to seamlessly link themselves to their customers, suppliers, and partners. Although the goal was to optimize these relationships and boost operational performance, the results were often disappointing. The good news is that best practices have been revealed. Here is what works and what you need to do to reap the benefits of a fully integrated business.

Serving as a company's central nervous system, ERP systems orchestrate many functions, including order management, materials planning, warehouse management, payables, receivables, and general ledger. Staunchly believing in the ERP promise, companies have spent more than $70 billion worldwide on software licenses in the past 10 years. In addition, implementation resources cost them several times more than the license fees. Implementation time ranged from six months to four years, depending on the number of business units, functionality scope, and the configuration's complexity. From the time of the initial investment decision, it took medium-size to large companies at least five years to achieve steady-state performance levels and recoup their investment.

Companies can learn from lessons of past implementations. Many programs were overly focused on IT functionality at the expense of business process development. As a result, their expected benefits were compromised or delayed. Conversely, the best performers ensured that process management, governance, and other nontechnical issues were addressed properly.

ERP implementations encounter a set of common challenges (see Table 1). Fortunately, there's also a defined set of best practices, which we discuss in this article. They include:

1. Secure executive alignment for the broad-based ERP plan,

2. Establish the right governance model,

3. Emphasize business process transformation,

4. Ensure ongoing ERP support,

5. Address organizational needs head-on,

6. Keep the business mission top of mind, and

7. Manage IT infrastructure relentlessly.

These practices apply long after the ERP go-live event. When a management team takes a second look, it often uncovers issues that had been neglected before. In this "post-implementation" phase, companies work to realize the originally planned (but often underachieved) results by addressing business process management, adding new functionality upgrades, and driving continuous improvement.

1. SECURE EXECUTIVE ALIGNMENT FOR THE BROAD-BASED ERP PLAN

Top performers clearly articulate the planned changes and show how these changes will support the company's strategy. A well-engineered plan, with a robust, multiyear roadmap and measurable milestones, is a must for ensuring alignment throughout the ERP project and beyond. The executive team must commit to the initiative and ensure the organization understands what needs to change and when.

Even if the executives aren't on the same page at first, building a comprehensive ERP roadmap can help generate the necessary alignment. Consider a Midwestern medical device company (MMDC) that outgrew its original computer systems through its business success. IT expenditures were high, and users complained about functionality limitations and unnecessary constraints. The executive team agreed the time had come to upgrade to a high-end ERP system, but disagreements soon surfaced: Was the estimated $75 million investment worth it? Would corporate or business units fund it? Who should be accountable for achieving the benefits? How much ERP customization would be allowed by the business units? Which business unit should be the guinea pig by going live first?

To break the impasse, a cross-company team (representing each business unit, corporate management, and IT) developed a broad-based but rigorous ERP roadmap. This helped build consensus and guide the project's implementation. Here are some important success factors in achieving executive alignment:

* A senior executive took charge of each improvement initiative to ensure focus and accountability, and bonuses were tied to achieving project goals.

* The company used benchmarks to set aggressive yet achievable targets along with a multilevel dashboard that linked enterprise-level business results to detailed operational metrics.

* Each initiative in the ERP portfolio was independent, with its own business case to prove adequate results. The development of each initiative proceeded through a gate review process to ensure interim milestones were met.

* To understand the true financial impact, a project controller was appointed to diligently track project costs and benefits.

The roadmap provided the planning details and rigorous analysis the executive team needed, and it facilitated alignment among the respective stakeholders. Finally, the roadmap became a basis for evaluating progress and a constant reminder of the targets ahead.

2. ESTABLISH THE RIGHT GOVERNANCE MODEL

The shift from a functionally driven business with disparate information systems and limited visibility of business drivers to a cross-functional business with clear process owners and effective decision making requires a new governance model. Similarly, the shift from highly decentralized business units to a model with many standardized processes requires ongoing governance to allow both operational innovation and process harmonization. Process owners must have the accountability and authority to drive results. The cross-functional management team must make tough decisions that affect multiple business units or functions. Senior management must also visibly drive and support the new ways of working.

Consider a Midwest-based heavy construction equipment (HCE) corporation. HCE encouraged its business units to respond to competitive conditions in their particular markets and define their own IT requirements, using little or no coordination with the other business units.

To fix the shortcomings of its legacy systems and prepare for the Y2K deadline, HCE implemented SAP in the late 1990s. The initial implementation for the high-risk areas (sales, distribution, and finance) was completed on time and within budget. But after the implementation team disbanded, no one governed continuous improvement changes, and no plan existed for adding functionality. Over time, the fragmented infrastructure and disparate processes impaired the company's ability to operate as a unified global business, and operating performance suffered.

Table 1: Common ERP Challenges [table not reproduced in this transcript]

Recognizing the need for ongoing governance in the post-implementation environment, HCE implemented a governance model to represent the different business units as well as corporate stakeholders. The ultimate purpose of this governance model was to sustain harmonization of business processes across all business units based on a single SAP instance. At the same time, the model would fail if it proved overly bureaucratic or ineffective in driving change. There were several important success factors.

Establishing the business case for harmonization. As a precursor to standardizing business processes, HCE, with help from a consulting firm, conducted a supply chain benchmark analysis that revealed many operating deficiencies caused by disparate versions of the ERP system. Standardization would result in more efficient inventory management, better order-fulfillment performance, and significant reduction of material costs.

Establishing an effective governing body. A Business Process Governing Board (BPGB), made up of business process owners from the business units, was formed to optimize all business processes.

Applying rigorous criteria for allowing exceptions. The BPGB reviews all business process change requests, and only a small number of circumstances qualify as local exceptions to a global process, such as differing country regulatory requirements.

Continuously prioritizing and tracking approved changes. Once the BPGB agrees on and prioritizes the global process solution, ERP analysts develop the blueprint, update the ERP global template, and roll out the new version to all business units. The BPGB regularly reviews the status of these changes. This process continues to this day and is even more important now than during the implementation phase.

This governance approach has allowed HCE's business units to continue with operational innovation initiatives while preserving the integrity of standardized processes across business units.

3. EMPHASIZE BUSINESS PROCESS TRANSFORMATION

Changes to business processes are what matter most in ERP implementations, and they must be managed before, during, and after implementation. The benefits from the ERP system can't be claimed until the underlying processes change. Since business units sometimes operate like independent fiefdoms, they must mandate that business processes in their respective regions match the company's standard processes.

As initial ERP deployment concludes, post-implementation begins, and a continuous improvement plan picks up where the initial roadmap left off. This plan must define specific performance metrics and targets for major processes, with appropriate phases and milestones for monitoring ongoing improvement against longer-term goals.

Business process management was critical for a global materials company (GMC) whose goal was to move away from local call centers and present a single face to the customer. Instead of making multiple calls to locations throughout the world, a customer can book and confirm an order in minutes instead of hours with a single phone call to a single regional call center. To implement this capability, the company chose a leading ERP system covering all supplier-to-customer processes. Analyses of each major business process indicated that the project would save more than $28 million annually, with a 30% inventory reduction. Important steps in GMC's business process transformation included:

Focusing on delivering new capabilities. GMC enabled a global available-to-promise (ATP) process for customer orders by using the new sales, production planning, quality management, and inventory processes in the ERP system. For the first time, a customer service representative in Europe could take an order from a customer in France for a product manufactured in Ohio and provide a confirmed delivery date, all in a single phone call. (A minimal sketch of the ATP idea follows this list.)

Using a staged approach to implement complex changes. After demonstrating benefits from the ATP capability, GMC launched an online sales portal. Customers can now place online orders, obtain immediate confirmations, and even evaluate tradeoffs such as getting products shipped from another region instead of having them made in the same region at a later date. Eventually, GMC added advanced planning functionality to drive operational improvements, such as the ability to vary inventory replenishment policies based on the frequency and volume of demand. But the company used this advanced systems functionality only after it mastered the global sales and operations planning process, which allowed it to balance its supply with global demand over a 12- to 36-month planning horizon.
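A minimal sketch of the ATP idea referenced above, assuming the generic available-to-promise rule (on-hand stock plus scheduled receipts minus committed demand by the requested date); the function and figures are hypothetical, not GMC's actual system.

```python
# Generic available-to-promise (ATP) check: can `qty` units be promised by
# `need_date`? Hypothetical sketch, not GMC's implementation.
from datetime import date

def available_to_promise(on_hand, scheduled_receipts, committed, qty, need_date):
    """scheduled_receipts and committed are lists of (date, quantity) pairs."""
    supply = on_hand + sum(q for d, q in scheduled_receipts if d <= need_date)
    demand = sum(q for d, q in committed if d <= need_date)
    return supply - demand >= qty

# A CSR checks a French customer's order against a plant in Ohio in one call:
print(available_to_promise(
    on_hand=120,
    scheduled_receipts=[(date(2007, 9, 10), 200)],
    committed=[(date(2007, 9, 12), 250)],
    qty=50,
    need_date=date(2007, 9, 15),
))  # True: 120 + 200 - 250 = 70 >= 50
```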

Far too often, companies miss the opportunity to gain competitive advantage through process transformation during ERP implementation. The best performers develop a master plan for process transformation linked to ERP implementation. In the post-implementation phase, the same principle applies. Automating a mediocre process may provide some benefit, but fixing a mediocre process before automating it delivers stronger results.

4. ENSURE ONGOING ERP SUPPORT

Reaping maximum benefits also requires ongoing end-user support, master data maintenance, and plans for realizing additional value from the ERP system. These needs persist forever. HCE neglected to provide adequate post-implementation support and suffered the consequences when business users failed to adopt, refused to use, or were unable to fully leverage newly installed ERP capabilities.

Several years ago, HCE implemented a new procurement application with new functionality for spend analysis, supplier quality management, and strategic sourcing. The new application replaced an old system that generated only basic purchase agreements. Even though HCE provided some training, it didn't anticipate serious transition challenges such as employees struggling to adopt the new user interface with a range of new power tools, graphics, etc. The transition from repetitive processing of purchase agreements to providing business analysis also proved highly challenging. To compensate for high turnover and low user proficiency with the new tools, HCE had to add temporary personnel.

Learning from past mistakes, HCE later tackled the following issues:

Addressing multifaceted user readiness. When HCE reimplemented its ERP system, it used a multidimensional training program that required users to be fully ready before the go-live date. Users attended training on both system-level transactions and business processes so they could grasp the bigger picture of what was happening upstream and downstream and how their actions affected others. Users first learned about the planned changes throughout the company, then completed in-depth training and took certification tests. Only then could go-live occur.

Ensuring adequate end-user support. Whether implementing a new ERP system or resuscitating an underperforming one, companies should provide adequate support. Consistent user support and resources helped HCE sustain day-to-day operations in its ERP reimplementation. Specifically, HCE set up specialized help desks and cultivated local super users. To train both new and experienced end users appropriately, HCE provided a number of qualified, accountable trainers, even in the post-implementation phase.

Assigning accountability and resources to groom and polish master data. Data owners, not clerical staff, should maintain master data, such as customer information or supplier lead times. For example, the buyers within the purchasing team are the data owners for supplier lead times.

Planning for maintaining the ERP asset. Ongoing support includes not only systems upgrades and new software functionality but, most importantly, business process improvement. In its post-implementation phase, HCE dedicated a small number of experts and resources outside IT to plan and oversee this ongoing process.

Providing ongoing ERP support is fundamental to realizing ERP benefits. Key focus areas include end-user support, master data maintenance, and ongoing functionality upgrades. This support should be built into the business plan along with appropriate resource assignments.

5. ADDRESS ORGANIZATIONAL NEEDS HEAD-ON

Implementing high-performing business processes usually requires making organizational adjustments. These changes are typically needed to better equip employees to fully leverage new tools, create entirely new roles, and fully operationalize new business capabilities. Key considerations for addressing organizational needs include:

Upgrading traditional roles. Successful post-implementation efforts often require enhancing traditional roles, including those of customer service representatives (CSRs), purchasing managers, and supply chain planners. CSRs require special training to use ATP methods for promising customer orders. They also must maintain and update master data, which should be measured as part of their overall performance evaluation.

Upgrading analytical skills and capabilities. Purchasing managers, for example, may need more sophisticated analytical skills to identify opportunities to consolidate spending. They may need advanced leadership and facilitation skills to forge joint initiatives across various business boundaries, for instance, to convince business units to give up favorite suppliers to help reduce costs.

Recruiting for specialized roles. To exploit advanced supply chain planning tools, some companies have hired high-powered operations research experts or mathematics Ph.D.s to determine, for example, the best inventory stocking policies and select appropriate algorithms for reorder policies.

Adjusting job descriptions and pay scales. HCE's human resources team needed to revise job descriptions and compensation levels to ensure they could retain employees with valuable, newly acquired skills.

To be successful, the post-implementation organizational model needs to allow for upgrading traditional roles, creating new roles, applying change-management processes, and initial and ongoing training.

6. KEEP THE BUSINESS MISSION TOP OF MIND

Because of their complexity, ERP system projects are often fraught with IT configuration issues that, if unchecked, can confound the business mission. In post-implementation, individual business units often want to change their IT configurations or add instances that deviate from standard business processes. Customization, however, has a double cost: a one-time up-front implementation cost and the hidden costs when upgrading. In this case, customizations often have to be redone in the new version or discarded. Sticking to standard processes can prevent unnecessary complexity and mitigate later investments and support costs.

Consider MFG, a manufacturer of highly engineered products. MFG's business operations doubled in size through two acquisitions over two years. Even though its business units used the same ERP system, there were significant configuration differences that made it impossible to realize operational synergies. Two MFG divisions, A and B, represented configuration extremes. Division A had allowed its ERP system to be so heavily customized that an upgrade to the ERP software was nearly impossible, and a complete reimplementation was needed. In contrast, Division B allowed very little customization. While this approach met the aggressive implementation schedule, it led to numerous usability challenges. Over the years, hundreds of user-developed point applications were built in Microsoft Access or Excel, and many of them became critical components of business processes. Even though Division B's core ERP was highly standardized, the overall applications environment, including user-built applications, was far from standard and posed significant migration challenges.

Important steps to corralling IT configuration and preserving the business mission at MFG included:

Establishing guidelines for customization. MFG's ERP standardization team determined the best balance between usability and standardization. It considered many factors, including availability of new reporting tools, migration utilities for upgrades, and automated menus based on security profiles. Implementation guidelines were used for the ERP upgrade that achieved harmonization among the various business units. First, users could build their own reports from standardized data sources in a data warehouse, thus eliminating the need for hundreds of custom-programmed reports. Second, additional system configuration was allowed only if justified by a clear business need and if it didn't require the code to be modified. Finally, security profiles were designed to allow broader system use.

Enabling data mining. Users should have tools to manipulate rich and timely data offline so as not to limit their creativity or require special IT programming to create reports. Data mining and other creative uses of available ERP data typically are best done with a data warehouse application, which often isn't included in the core ERP. To help maximize value from the data warehouse, MFG trained end users and proactively recruited super users for this application.

Ensuring users have rights to needed data. While there may be valid reasons to limit access privileges, supply chain analysts, planners, and managers typically require broad-based access to supply chain performance data. For example, local pricing among global markets can be a sticky issue, but the need for global visibility regarding inventory levels and planned production is fundamental to an efficient supply chain.

The key to managing IT configuration complexity is to ensure that the right balance exists between the business driving the IT configuration and vice versa.

7. MANAGE IT INFRASTRUCTURE RELENTLESSLY

Robust infrastructure planning addresses network capacity planning for anticipated traffic volumes and how frequently the data is refreshed on computer servers. The goal is to ensure acceptable response times, uptime, and accessibility (to enable Web-based user access, for example). If the system disappoints or frustrates users, they're likely to become disenchanted and won't fully embrace the tools. In lay terms, even a highly tuned ERP engine will be unable to support users' needs if its pipes are too few or too small.
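The due-diligence arithmetic involved can be sketched with invented numbers (the article gives none): for example, checking whether a nightly batch upload fits its transfer window on a slow line.

```python
# Back-of-envelope link sizing, all figures invented for illustration:
# does a store's nightly 50 MB batch upload fit a 1-hour window on a
# 256 kbit/s frame-relay line?
upload_bytes = 50 * 10**6        # assumed nightly batch size
link_bps = 256_000               # 256 kbit/s line
minutes = upload_bytes * 8 / link_bps / 60
print(f"{minutes:.0f} minutes")  # ~26 minutes, so it fits the window
```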

MFG, the engineer-to-order manufacturer, implemented an ERP system several years ago to achieve efficient inventory management, eliminate material shortages, and increase manufacturing capacity utilization. It cleaned up bills of materials (BOMs) and manufacturing routings and improved the processes for the master production schedule (MPS) and material requirements planning (MRP). Furthermore, inventory cycle counting was made more rigorous to improve inventory accuracy.

While limited-scope pilot projects were successful, the full system go-live wasn't. Once the systems were fully populated with all production data, the MPS and MRP processes ran for days instead of hours, bringing the operations to a halt. Analysis showed that the IT architecture couldn't handle the massive workload caused by the company's deep, multilevel bills of materials, complex routings, and the magnitude of distinct part numbers. Important steps in reforming MFG's infrastructure included:

Addressing both technical and business drivers. During the subsequent three months, the IT team rearchitected the hardware setup, tuned databases, and increased network bandwidth among several facilities. In parallel, the business teams simplified some routings, flattened some bills of materials, and streamlined planning algorithms to eliminate cases of unnecessary complexity.

Designing and conducting meaningful testing. Comprehensive stress tests were used to ensure success of the second go-live event. This resulted in acceptable system performance and allowed users to achieve gains defined in the business case justification.

Deploying monitoring tools if network scale and complexity warrant them. Complex networks require capable monitoring tools to assess the network's health and performance over time. These tools are sophisticated computers that help pinpoint and solve network problems at the outset of any infrastructure program and throughout its life cycle.

Using objective and quantitative performance measures. Just as key performance metrics are essential for monitoring business operations, they're equally important for monitoring IT operations that support the business. Even if IT operations are outsourced, companies can use performance metrics (e.g., response time, uptime/availability performance, or help desk support) to hold the provider accountable.

In the planning phases, due diligence ensures that the ERP and its supporting network will have the required capacity and performance to keep pace with anticipated business needs. In the post-implementation phase, system performance should be monitored at all times, using well-defined metrics, such as response time and uptime. Sophisticated tools are available for troubleshooting and resolving problems. As a company adds more users and functionality over time, the network's capacity must also be scaled up to sustain desired performance levels.

PUTTING THE ERP TO WORK

The seven key challenges we addressed represent the cumulative experiences of hundreds of companies. The proposed solutions serve as a program guide for companies that are dissatisfied with their existing ERP implementations and are evaluating new ERP systems, replacements, or upgrades. Top-performing companies often have less software customization and complexity, but a higher level of harmonization between processes and systems (see "What Makes an ERP System Implementation Successful?" in the sidebar below).

To be effective, ERP systems require constant support and maintenance, not just by the IT department but by the business itself. Support must include the right processes and organizational models, backed by the appropriate governance and championed by the executive team. Clearly, an ERP system isn't simply another complex solution to hand off to the IT department and to a few tech-savvy business users. If the ERP system is to deliver a consistent performance advantage, it's also the responsibility of senior business leaders to see it through. By properly addressing these management issues, companies can finally realize the promises made by ERP systems, and ultimately surpass expectations.

[Sidebar]

What Makes an ERP System Implementation Successful?

A 2005 PRTM study of 60 companies showed that having an advanced ERP system won't necessarily lead to better results (Quadrant II in Figure 1). In fact, advanced systems tend to magnify process deficiencies. And while mature business processes are necessary for achieving desired business results, they are in themselves insufficient (Quadrant IV). To achieve repeatable results, companies must master business process management and IT tools (both software and hardware), thus establishing themselves in Quadrant I (mature processes and mature systems).

Even without advanced ERP systems, companies with mature business practices are 38% more profitable, have 22% less inventory, and achieve 10% better delivery performance than companies with less-mature business processes. Companies combining mature business processes with advanced ERP systems achieve a further 27% profitability advantage and as much as a 40% gain in performance across the full range of supply chain metrics, including delivery performance and inventory. These results aren't surprising since, for example, it can take weeks to manually consolidate demand data, compared to hours for a fully functional ERP system. Clearly, companies that manually aggregate demand data are making business decisions with stale information.

On the systems side, PRTM found that most companies still use less-mature ERP system solutions. More than 65% use functionally oriented legacy systems, such as standalone material requirements planning (MRP) systems and, in some cases, Excel spreadsheets. Some use ERP modules implemented as point solutions that lack overall enterprise planning data visibility. Since most of the companies studied are executing mature planning processes without sophisticated enabling systems, they rely too heavily on spreadsheets and people. This puts them at risk of underperforming in all key metrics.

 

Expert:  Chris Parker replied 4 years ago.
Thanks. I will complete the two questions before their respective deadlines.

Regards,
Chris
Customer: replied 4 years ago.
Is the work ready?
Expert:  Chris Parker replied 4 years ago.
I'll be posting in a couple of hours.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the first question from the following link: Click.

Please review and accept.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.

Download my response to the second question from the following link: Click.

Please accept the two answers separately.

Thanks,
Chris
Customer: replied 4 years ago.
Thanks, XXXXX XXXXX accepted both works. I have 3 questions coming your way.
Expert:  Chris Parker replied 4 years ago.
Thanks. I will look forward to the new questions.

Regards,
Chris
Customer: replied 4 years ago.

1. Why do heuristics and biases play a major role in the success or failure of an IT project? What specific kinds are the most influential factors, in general and for your specific organization? Must be at least 200-300 words (due tonight)

 

2. How does organization culture play a role in the successful reliance on the Joint Application Development (JAD) process to identify and gather business requirements? Would JAD work in your company? Explain why or why not. Must be at least 200-300 words (due Friday)

 

3. Based on Mitchell's (2007) article, explain how Musicland stores were converted to the Trans World Entertainment system in 90 days. Must be at least 200-300 words (due Sunday)

 

As usual, thanks for the great work.

Customer: replied 4 years ago.
any update yet?
Expert:  Chris Parker replied 4 years ago.
There were some issues with the site yesterday, so I couldn't see your post. I will answer your first question shortly.

Regards,
Chris

Customer: replied 4 years ago.
ok thanks
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the first question from the following link: Click.

Please review and accept.

Also, please post Mitchell's (2007) article for the third question.

Regards,
Chris

Customer: replied 4 years ago.

Abstract (Summary)

On March 1, as Trans World Entertainment Corp (TWE) prepared to acquire Musicland, TWE's CIO, John Hinkle, sat in on a due diligence meeting with the management of the bankrupt music store chain. His challenge: Integrate the antiquated point-of-sale (POS) systems in 335 stores owned by Musicland with the finance and replenishment systems that served TWE's existing 800-store business. Hinkle created several project teams to handle the transition. Working with NCR Corp and POS software supplier Epicor Software Corp to get all of the equipment staged and shipped to the stores on short notice was the biggest challenge, but the longest delays came from waiting for the installation of DSL or backup frame-relay services. Court Newton, director of store systems, says using in-house staffers was a win. By using internal staff, TWE no doubt took a productivity hit from having the teams on the road rather than doing their normal jobs.

Copyright Computerworld, Inc. Oct 15, 2007

[Headnote]
Trans World Entertainment's IT team had just 30 days to integrate 335 Musicland stores with its backend systems. Here's how they did it. By Robert L. Mitchell

ON MARCH 1, as Trans World Entertainment Corp. prepared to acquire Musicland, TWE's CIO, John Hinkle, sat in on a due diligence meeting with the management of the bankrupt music store chain. His challenge: Integrate the antiquated point-of-sale (POS) systems in 335 stores owned by Musicland with the finance and replenishment systems that served TWE's existing 800-store business, and do it before the deal closed at the end of the month. As if that weren't enough, TWE management wanted all of the Musicland stores across the country to be networked and running on TWE's own POS system and all employees trained within 90 days.

Hinkle's team had been down this road before, having successfully integrated five other chains, ranging in size from 30 stores to 400 stores, in the past 10 years. But the 30-day window was a first. The fact that management even considered it says much about IT leadership, says Alex Cullen, an analyst at Forrester Research Inc. in Cambridge, Mass. "It shows that the executive team had a lot of confidence in the CIO."

TWE had few options. "That was all driven by Sarbanes-Oxley," says controller XXXXX XXXXX, noting that Musicland didn't have any documented controls for regulatory compliance in place. "We would have had to hire an outside audit firm, and that would have been very costly." The incremental investment required to move quickly was "not insignificant," Hinkle adds, but it paid for itself in the first month.

Hinkle created several project teams to handle the transition. They included representatives of TWE's financial, merchandising, and planning and allocation operations. Jim Razzano, director of software development, worked with Musicland's IT staff to map data from Musicland's mainframe to Albany, N.Y.-based TWE's back-end systems. Having a standardized data interface made the job easier.

"By mapping data into a proven interface, we greatly reduced the tune for testing and validation for processes," Razzano says. But developers still had to write some one-time load routines where data from the Musicland system couldn't be delivered in the proper format.

Transaction codes had to be mapped between financial systems, and getting the replenishment systems to service the new stores required inputting store configurations, capacities and inventory levels into TWE's system, including all of the stock-keeping unit codes for every product sold in Musicland stores.

All of Musicland's eight-digit SKUs had to be mapped to the 12-character universal product code format that TWE used. With 20,000 to 30,000 SKUs per store to deal with and 1.5 million SKUs in TWE's product database, "it was like an [extract, transform and load] project on a grand scale," Hinkle says.
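One transform step in such an ETL job can be sketched as follows: padding an 8-digit SKU into the 12-digit UPC-A format and computing the standard check digit. The left-padding rule is invented for illustration; TWE's real mapping was driven by lookups against its product database, not simple padding.

```python
# Illustrative ETL transform: hypothetical 8-digit SKU -> 12-digit UPC-A.
# The zero-padding rule is an assumption; the check digit algorithm is the
# standard UPC-A one (odd-numbered positions weighted 3).
def upc_check_digit(digits11: str) -> str:
    odd = sum(int(d) for d in digits11[0::2])   # 1st, 3rd, ... digits
    even = sum(int(d) for d in digits11[1::2])  # 2nd, 4th, ... digits
    return str((10 - (3 * odd + even) % 10) % 10)

def sku_to_upc(sku8: str) -> str:
    digits11 = sku8.zfill(11)   # assumed: left-pad the SKU to 11 digits
    return digits11 + upc_check_digit(digits11)

print(sku_to_upc("12345678"))   # -> 000123456784
```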

The teams worked seven days a week until the deadline. The cutover took place on a Monday, when TWE began receiving daily batch uploads of store data from Musicland's mainframe. "It took two or three days to work the kinks out," says Hinkle, but the systems were tracking inventory, replenishing the acquired stores and generating reports that included the Musicland properties by week's end.

PROJECT MOVES TO STORE LEVEL

With the back-end systems running smoothly, Hinkle focused on getting the stores online and transitioned to new POS systems. Director of IT Operations XXXXX XXXXX arranged to beef up the back-end corporate systems to handle the extra load and was already working on installing in-store networks and broadband connectivity at each location. "The systems [in Musicland's stores] were so old that they had a modem on every register for credit card checks," he says.

Working with NCR Corp. and POS software supplier Epicor Software Corp. to get all of the equipment staged and shipped to the stores on short notice was the biggest challenge, but the longest delays came from waiting for the installation of DSL or backup frame-relay services, which went right down to the wire. In many locations, broadband service simply wasn't available. "We ended up with 80 stores on frame," which, at 256 Kbit/sec., was slower and more expensive than DSL, he says.

Simmons contracted out the networking job, but POS installations and system training were handled by 25 in-house teams that included some store managers and district managers. Epicor staged the systems for the teams. "All they needed to do was take it out of the box, plug it in, and they were ready to go," says Diane Cerulli, director of product marketing, who was Epicor's project manager for the job.

Court Newton, director of store systems, says using in-house staffers was a win. "They had more skin in the game than independent contractors," he says. Newton spent the first six weeks making preparations, including bringing in the teams for a weeklong training before sending them into the field. "By far, this was the most well-orchestrated platform transition I've ever seen," says XXXXX XXXXX, a Musicland regional manager who participated in the installation training.

The teams then spent six grueling weeks on the road. "We changed everything: hardware, software, networks, policies and procedures. It was a relentless pace of execution for six weeks. There were no fallbacks," Newton says. Staffers had to work around problems such as damaged shipments, improperly staged equipment and incorrectly placed network jacks. Some 30 to 40 stores that didn't have either broadband or frame relay were temporarily set up with dial-up connectivity. "If there was a problem, we ran into it," Newton says.

All stores were online in 89 days, one day ahead of schedule. Once the last store came online in late June, batch uploads from Musicland's mainframe were turned off. "It was four months to total transition" from the time of the initial meeting, Hinkle says. That's remarkable, says Cullen, adding that many integration projects get bogged down after the deal closes.

Having standardized, repeatable processes was the No. 1 key to success, Hinkle says. Consistency was also important. For example, while no two stores have the same topology, the network architecture is exactly the same.

Good relationships with TWE's vendors, including AT&T Inc., Epicor, IBM and NCR, were vital to keeping the project on track and costs in check. TWE regularly gave some of its business to key partners rather than forcing them to bid on every job. That paid off when Hinkle asked vendors to bend over backward to meet the 30-day window. "We paid very few premiums," he says. "When you work closely together, you can achieve rapid results."

Finally, TWE's "train the trainer" model and the use of internal staff for the field installation and training made a big difference. "It gave them hands-on experience," Hinkle says. The new system gives Cox profit and loss reports for his stores within seven days of closing, much faster than the 30 days the old system required.

TWE's approach is unusual, says Cullen. "If you want to move really fast, your normal inclination is to get professionals who are experienced," he says. By using internal staff, TWE no doubt took a productivity hit from having the teams on the road rather than doing their normal jobs. But leveraging the Musicland store management's familiarity with the staff probably helped smooth the training, Cullen says.

Cox says using managers as installers and trainers had another side benefit. "The strongest byproduct was the connection with other [managers]," he says. "Those relationships live on."

Expert:  Chris Parker replied 4 years ago.
Thanks. I will answer the second question today.

Regards,
Chris
Customer: replied 4 years ago.
ok thanks
Expert:  Chris Parker replied 4 years ago.

Download my response to the second question from the following link: Click.

Please review and accept.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the third question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Thank you very much. I have three new questions that I will be sending to you shortly.
Customer: replied 4 years ago.

1. What is enterprise-wide analytics technology, and how does it play a part in understanding business processes? What are the challenges in rolling out a business intelligence tool? 200-300 words (due Wednesday)

 

2. What are some of the challenges associated with requirement elicitation? How does an iterative approach help that process? 200-300 words (due Friday)

 

3. Based on Perkins's (2007) article, list five reasons why projects fail. Provide an example of a project failure. Must be at least 200-300 words (due Sunday)

Expert:  Chris Parker replied 4 years ago.
Thanks for your new questions. I will complete them before your deadlines.

Regards,
Chris
Customer: replied 4 years ago.
thanks
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the first question from the following link: Click.

Please review and accept.

Regards,
Chris

Customer: replied 4 years ago.

Perkins's (2007) article

 

12 Things You Know About Projects but Choose to Ignore. Bart Perkins. Computerworld. Framingham: Mar 12, 2007. Vol. 41, Iss. 11; pg. 34.

Abstract (Summary)

Over the past 10 years, virtually every major IT publication has printed articles on why large projects succeed or fail. Despite all the excellent advice available, more than half of the major projects undertaken by IT departments still fail or get canceled. Projects fail because people ignore the basic tenets of project success that they already know. Here are some of the common reasons for failure: 1. an ineffective executive sponsor, 2. a poor business case, 3. invalid business case, 4. very big project, 5. a lack of dedicated resources, 6. eyes off the suppliers, 7. unnecessary complexity, 8. cultural conflict, 9. no contingency, 10. long projects without intermediate products, 11. betting on a new, unproven technology, and 12. an arbitrary release date.


Full Text

Copyright Computerworld, Inc. Mar 12, 2007

THERE is no mystery as to why projects succeed or fail; people have been writing about effective project management for millennia. More than 2,000 years ago, Sun Tzu described how to organize a successful, highly complex project (a military campaign) in The Art of War. XXXXX XXXXX' classic book, The Mythical Man-Month, offers management advice targeted at running large IT projects. The U.K. National Audit Office recently published an excellent guide to delivering successful IT-enabled business change (www.nao.org.uk/publications/nao_reports/06-07/060733es.htm). Over the past 10 years, virtually every major IT publication has printed articles on why large projects succeed or fail.

Despite all the excellent advice available, more than half of the major projects undertaken by IT departments still fail or get canceled. Stuart Orr, principal of Vision 2 Execution, reports that less than 20% of projects with an IT component are successful, with success defined as being delivered on time and on budget while meeting the original objectives.

We know what works. We just don't do it.

Projects fail because people ignore the basic tenets of project success that we already know. Here are some of the common reasons - and there are many - for failure:

An ineffective executive sponsor. A weak or, even worse, nonexistent executive sponsor almost guarantees business project failure. Under weak executive leadership, all projects become IT projects rather than business initiatives with IT components. Since the 1980s, research has consistently found that effective executive sponsorship and active user involvement are critical to project success.

A poor business case. An incomplete business case allows incorrect expectations to be set - and missed. Many business cases describe business benefits in far-too-broad terms. Goals and benefits must be measurable, quantifiable and achievable. (See "Business Cases: What, Why and How," Computerworld, June 13, 2005.)

The business case is no longer valid. Marketplace changes frequently invalidate original business assumptions, but teams often become so invested in a project that they ignore warning signs and continue as planned. When the market changes, revisit the business case and recalculate benefits to determine whether the project should continue.

The project is too big. Bigger projects require more discipline. It's dangerous for an organization to undertake a project five or six times larger than any other it has successfully delivered.

A lack of dedicated resources. Large projects require concentration and dedication for the duration. But key people are frequently required to support critical projects while continuing to perform their existing full-time jobs. When Blue Cross attempted to build a new claims system in the 1980s, nearly 20% of its critical IT staffers were simultaneously assigned to other projects. The claims initiative failed. Project managers who don't have control over the resources necessary for their projects are usually doomed.

Out of sight, out of mind. If your suppliers fail, you fail, and you own it. Don't take your eyes off them.

Unnecessary complexity. Projects that attempt to be all things to all people usually result in systems that are difficult to use, and they eventually fail.

Cultural conflict. Projects that violate cultural norms of the organization seldom have a chance. The FBI's Virtual Case File was designed to share information in a culture that values secrecy and rarely shares information across teams. Moreover, FBI culture views IT as a support function and a "necessary evil" rather than an integral part of the crime-solving process. The project violated multiple cultural norms and met with significant resistance. The Virtual Case File was finally killed after costing more than $100 million.

No contingency. Stuff happens. Projects need flexibility to address the inevitable surprises.

Too long without deliverables. Most organizations expect visible progress in six to nine months. Long projects without intermediate products risk losing executive interest, support and resources.

Betting on a new, unproven technology. Enough said.

An arbitrary release date. Date-driven projects have little chance of success. Will we ever learn to plan the project before picking the release date?

See anything new here? That's exactly my point.

Next time, increase your chances for success by avoiding these common project pitfalls. Use the above list (and other industry guidelines) to evaluate your project. If you see too many signs of danger, cut your losses and either restructure the project or kill it.

Talk to experienced project managers and read project management literature to review what works and what doesn't. Though, in fact, you already know.

WANT OUR OPINION?

For more columns and links to our archives go to: www.computerworld.com/columns

[Author Affiliation]
BART PERKINS is managing partner at Louisville, Ky.-based Leverage Partners Inc., which helps organizations invest well in IT. Contact him at BartPerkins@LeveragePartners.com.

Expert:  Chris Parker replied 4 years ago.
Hi!

Thanks for posting the article. Download my response to the second question from the following link: Click.

Please review and accept.

Regards,
Chris

Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the third question from the following link: Click.

Please accept this and the previous answer separately.

Regards,
Chris
Customer: replied 4 years ago.

I need the second question redone please.

 

I would be interested in reading your thoughts / opinions on this question - perhaps you can supplement your response with an example. Thanks

Customer: replied 4 years ago.
And the third question also. Thanks
Expert:  Chris Parker replied 4 years ago.
I will add an example to the second question.

I already included the example of Ford with a reference in the third question. What specifically do you need for the third question?

-Chris

Expert:  Chris Parker replied 4 years ago.

I added an example to the second answer. Please download the modified answer from the following link: Click.

I hope that helps.

-Chris
Customer: replied 4 years ago.
If you don't mind, could you come up with your own thoughts/opinions on the second question? Thanks for all your help.
Expert:  Chris Parker replied 4 years ago.
Well, the response I gave for the second question does reflect my own thoughts and opinions.

I think what your instructor is asking is not to modify the original answer, but to provide a continuation through an additional follow-up question ....

Download the additional information from the following link: Click.

I hope this helps.

-Chris

Customer: replied 4 years ago.
Thanks a lot for your time and help.
Expert:  Chris Parker replied 4 years ago.
You are welcome. Thanks for accepting the second answer. I was wondering if you needed more help on the third (Perkins's article) answer.

Regards,
Chris

Customer: replied 4 years ago.

I have three new assignments for you, please.

 

 

1. What are the differences between the human-centered and user-centered approaches? Do you agree that the human-centered approach is more effective? Explain why or why not. 200-300 words (due Wednesday)

 

2. Describe the need for security measures in IT organizations and information systems. Consider potential risks as well as legal and ethical considerations for protecting data. 200-300 words (due Friday)

 

3. Based on the Geoff Keston (2009) article Scrum Project Management Techniques, how has Scrum influenced the design of Web-based applications? What are the implications of agile development on the traditional Systems Development Lifecycle (SDLC)? Must be at least 200-300 words (due Sunday)

Expert:  Chris Parker replied 4 years ago.
Thanks. I will complete them before their deadlines.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.

Download my response to the first question from the following link: Click.

Please review and accept.

Regards,
Chris

Customer: replied 4 years ago.
Thanks for the great work. If you don't mind, could you please add some examples? I will go ahead and accept the work now.
Expert:  Chris Parker replied 4 years ago.
Ok. I will post in an hour.

-Chris
Expert:  Chris Parker replied 4 years ago.
Download the examples from the following link: Click.

-Chris
Customer: replied 4 years ago.
OK, thanks. Is the second answer ready?
Expert:  Chris Parker replied 4 years ago.
I will post the second answer in a couple of hours.

-Chris
Expert:  Chris Parker replied 4 years ago.

Download my response to the second question from the following link: Click.

Please review and accept.

Regards,
Chris

Customer: replied 4 years ago.
Thanks
Expert:  Chris Parker replied 4 years ago.
You are welcome. I will post the third answer today.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.

Download my response to the third question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

1. Is testing a distinct phase of a project, or does it come into play during other phases? Explain your answer. How might tools help the quality and effectiveness of testing? 200-300 words (due tonight if possible please)

2. Based on Walsh's (2007) article, explain why end-user satisfaction should be used in addition to return on investment (ROI) when measuring new systems. 200-300 words (due Friday)

3. Based on Davidson and Kumagai's (2008) article, list three advantages and risks associated with open source software. 200-300 words (due Sunday)

Expert:  Chris Parker replied 4 years ago.
Thanks for the new questions.

Can I answer the first question by 9 AM EST tomorrow?

Also, could you please post articles for the 2nd and 3rd questions?

Regards,
Chris
Customer: replied 4 years ago.
9 AM today would be great.
Expert:  Chris Parker replied 4 years ago.
I will be posting in an hour. Sorry for the delay.

Meanwhile, could you please post the articles for questions 2 and 3?

Regards,
Chris
Customer: replied 4 years ago.

Walsh's (2007) article

 

Proving the value of IT is always tricky. You invest thousands - if not millions - of dollars in hopes of creating greater efficiencies that reduce cost or open opportunities for new lines of revenue. Quantifying those values into an ROI statement that has some basis in reality - difficult, but priceless.

Some people would say the equation is quite simple: Take the amount of money you spent on a new system, subtract the licensing and maintenance cost of the old system, add the cost (reduced, ideally) of operating and maintaining the new system, then subtract the amount of time and money saved through eliminated processes.
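To make the arithmetic concrete, here is a minimal Python sketch of the calculation described above. Every figure and variable name is hypothetical, and spreading the recurring costs and savings over a three-year window is an assumption the paragraph leaves open.

# Hypothetical back-of-the-envelope ROI sketch, following the prose above:
# spend on the new system, minus the old system's licensing/maintenance,
# plus the new system's running cost, minus savings from eliminated
# processes. All figures are invented for illustration.

new_system_cost = 500_000   # one-time spend on the new system
old_cost_avoided = 120_000  # old licensing/maintenance avoided, per year
new_running_cost = 80_000   # operating/maintaining the new system, per year
process_savings = 150_000   # time and money saved via eliminated processes, per year

years = 3
net_cost = (new_system_cost
            - years * old_cost_avoided
            + years * new_running_cost
            - years * process_savings)

annual_benefit = old_cost_avoided + process_savings - new_running_cost
print(f"Net cost over {years} years: ${net_cost:,}")  # negative = net gain
print(f"Simple payback: {new_system_cost / annual_benefit:.1f} years")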

ROI, though, is a moving target. How do you prove the value of IT after the acquisition and implementation of hardware and software? How many ROI models predicted a specified amount of savings only to have those savings never materialize? How many systems implementations were supposed to simplify processes only to have equally complicated processes emerge? Funny how the ROI calculation the vendor shows you on that fancy worksheet is never what you get.

What IT departments and service providers often forget is that the ultimate value proposition isn't measured in dollars and cents but in the user experience. The success of services such as Google and Salesforce.com is based very much on their ease of use and delivery of promised capabilities. The success of Apple's iPod, iPhone and iTunes is founded in superior ease of use and user experience. Companies struggling with their products and services (did someone say Microsoft Vista?) can often find their challenges rooted in poor user experiences and unmet expectations.

Consider the example of BT: When the British telecommunications giant began offering help desk services to its business customers, particularly SMB clients, customer satisfaction ratings initially hovered around a dismal 20 percent. The problem wasn't with the ultimate resolution, but with the delivery of the service. Users would get frustrated after being passed from one technician to another.

BT deployed Citrix Online's Go-To-Assist, a remote access tool that gives admins the ability to look into a client regardless of its location or connection. BT combined the power of Citrix Online's tool with its ability to transparently escalate to different levels of support - the user would get the problem resolved in a single session without knowing multiple technicians were poking around the machine. The result was an astounding improvement in customer satisfaction scores - 97 percent - and a significant improvement in customer renewals.

Citrix Online is drinking its own Kool-Aid and taking the power of Go-To-Assist to another level by applying the Net Promoter model to its internal support services. Net Promoter measures customer satisfaction with the aim of getting everyone to the point at which they would actively promote your product or service. Rather than simply looking at its IT department from an expense and savings perspective, Citrix Online is measuring and rewarding its IT staff based on how well the IT department serves its users.

"It's about what metrics and models you are going to reward and encourage, and how you can prevent gaming the system," says Citrix Online president Brett Caine. "It's about being relative to the experience; metrics will tell you volume and time, but don't tell you about experiences."

Many enterprises are already looking for IT budget justifications that go beyond simple cost reductions and operational savings; they're looking to IT to improve their businesses. Improving and measuring end-user satisfaction can show the power of smart IT investments. Improving end-user satisfaction leads to improved productivity, which in turn leads to - we hope - more revenue and growth.

Isn't that what all businesses want from IT?

Expert:  Chris Parker replied 4 years ago.

Download my response to the first question from the following link: Click.

Please review and accept.

Regards,
Chris

Customer: replied 4 years ago.
Thanks
Expert:  Chris Parker replied 4 years ago.
You are welcome.

Download my response to the second question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

Davidson and Kumagai's (2008) article

 

On June 29, 2007, the release of the Apple iPhone overshadowed most other events in the information technology industry. For developers, distributors, and users of computer software, it was a red-letter day for a different reason: the release of version 3 of the General Public License (GPLv3). Because more "free" software is distributed under various versions of the GPL than under any other form of license that ever existed, the final form of GPLv3 and the extent to which it is adopted by software developers are hugely important.

As expected, GPLv3 contains some very significant changes that have the potential for widespread impact on the software industry and its market. In fact, virtually every noteworthy development in the open source arena should be considered in the light of GPLv3. This article begins with a primer on open source software, examines the release of GPLv3, and discusses some of the significant recent developments in the software industry that give context to important changes in this new version of the GPL.

A Primer on Open Source Software

The Definition of Open Source Software

Open source1 software is a part of the software ecosystem that affords software developers and users an alternative style of software development and distribution. It coexists in that environment along with a broad spectrum of other development and distribution methods, including public domain software, freeware, shareware, proprietary commercial software, and even vaporware. Open source software is found in development tools, utility code, operating systems, and applications.2

Finding a precise definition of "open source" software can be tricky. A recent Google search produced no less than 10 definitions from various sources, each of which gives a somewhat different perspective and slant.3 As discussed later, different members of the open source community differ in their ideas and goals for open source software development; however, one helpful and concise definition is:

Open Source Software is software for which the underlying programming code is available to the users so that they may read it, make changes to it, and build new versions of the software incorporating their changes. There are many types of Open Source Software, mainly differing in the licensing term under which (altered) copies of the source code may (or must) be redistributed.4

In comparison with commercial software,5 open source software differs in a number of ways. Commercial software is most often distributed only in binary, executable form (sometimes referred to as "closed source" software), and its developers reserve to themselves the ability to know the source code, to modify the software, to distribute the software, and to authorize others to do those things. It is not unusual for commercial software developers to refer to their software source code as the "crown jewels" of the company and to jealously guard it against disclosure to others. On the other hand, "open source software" is distinguished from commercial software by the availability of source code to everyone who receives a license to use the software and, in many cases, by a broad authorization to modify and redistribute it in both binary and source code form.

Over the past several years, market forces have operated to bring about numerous variants of commercial and open source software, all of which have their own distinguishing features. Along that spectrum we can now find:

1. Commercial "closed source" software, for which the source code is not available to anyone other than the original developer;

2. Commercial "closed source" software, for which the source code is licensed to authorized users under strict confidentiality terms for their own use in maintaining and modifying the software;

3. "Shared source" software, for which the source code is made available to licensees for limited purposes and subject to restrictions on use and disclosure;

4. "Community source" software, for which the source code is available to a limited community of users for broad purposes but is still often subject to restrictions on use, modification, and distribution; and

5. True "open source" software, for which the source code is made available for "free" use, modification, and distribution, but the license for which may be subject to conditions or reciprocal obligations that make it unsuitable for commercial use (more on that later).

Market forces are causing these scenarios to overlap and the distinctions between the various genres of software to become somewhat blurred, but they are still sufficiently vivid to generate a lot of debate. Much of that debate has been engendered by the "free software" movement, members of which advocate for free software with almost religious fervor. In fact, as discussed later in this article, GPLv3 represents an effort to keep the development and use of open source software in alignment with the moral and philosophical ideals and pragmatic goals espoused by certain factions of the open source community.

Significant practical differences between commercial and open source software include the ways in which they are developed and distributed, their relationship to standards, and the so-called total cost of ownership. While a detailed discussion of these things is beyond the scope of this article and has been the subject of several multi-day conferences over the past few years, it is helpful to make some general observations.

Development of Open Source Software

Open source software is typically developed by very talented individuals or by informal groups or communities of programmers who want to solve a technical problem and share the results with the rest of the world. Some advocates claim that the open source model generates a higher level of innovation, and some supporters claim that the open source model produces software that is technically equal or even superior to competing commercial products. Because of the way it is distributed, there may be a great variety of any given open source software product, including numerous derivatives that are relatively undocumented and may behave differently in subtle or not-so-subtle ways.

Commercial software is generally designed and developed in response to market demand, or at least in response to a perceived market need. Its features tend to be market driven and user driven. Development is relatively structured and disciplined, and the resulting products tend to be relatively well documented, quality tested, and supported. Features of commercial software generally are the subject of long-term, market driven evolution. Of course the user pays for all these things, whether it wants them or not.

Standards

Two things that distinguish commercial software are the ability of its owners to maintain intellectual property rights in its features and control its specifications. This can include application program interfaces (APIs) or other aspects that affect interoperability. Because of this, there has been a tendency among some open source advocates to equate "open source" with "open standards." One could argue, however, that there are two primary ways in which "standards" come into being. One is by widespread adoption in the market (de facto standards). The second is through standards-setting organizations.

In reality, virtually all of the important "open standards" have been developed by consortia of representatives of private industry. Another reality is that open source software, by its very nature, tends to become non-standard because of the relative ease and freedom of making modifications. In fact, commercial distributors of open source software often deliberately modify it to distinguish themselves. So, for example, while IBM Corporation, The SCO Group, Inc., Hewlett-Packard Company, and Sun Microsystems, Inc., all distribute derivatives of the UNIX operating system, they are not all the same and do not represent a "standard".

Total Cost of Ownership

The term "open source" is commonly equated with the term "free software," and many open source advocates argue that a key advantage of the open source solution is the ability to acquire software without paying a license fee. As the Free Software Foundation says, however, "Tree software' is a matter of liberty, not price. To understand the concept, you should think of Tree' as in 'free speech,' not as in 'free beer.'"6 The software we typically think of as "open source" is indeed more or less free at the acquisition stage, that is, it can be acquired, copied, and used without charge. When one considers the "cost" of software, however, it is important to consider the total cost of acquisition and use, that is, the "total cost of ownership."

When considering the cost of "free" software, it is important to have in mind such things as the cost of modifying, maintaining, and supporting the software, the cost of required user and technical documentation, the cost of quality assurance testing, and the costs of customization, implementation, defect correction, ongoing development, and dealing with security issues. The cost of training should be considered at two levels - training of technical personnel and training of end users - because the available labor pool will more likely be possessed of knowledge and skills developed through experience with commercial products and may lack the expertise needed to support and use "free" substitutes without additional, specialized training.
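A minimal Python sketch of such a total-cost-of-ownership comparison follows, using the cost categories the article lists. The five-year horizon and every figure are invented for illustration; a real comparison would model many more line items.

# Illustrative TCO comparison over a planning horizon. All numbers are
# hypothetical; the point is that acquisition price is only one term.

YEARS = 5

commercial = {
    "license_fees": 200_000,    # up-front acquisition cost
    "annual_support": 40_000,   # vendor maintenance and support
    "annual_training": 10_000,  # labor pool already knows the product
}

open_source = {
    "license_fees": 0,          # free at the acquisition stage
    "annual_support": 60_000,   # in-house maintenance, customization, security
    "annual_training": 25_000,  # specialized training for staff and end users
}

def tco(costs, years=YEARS):
    return costs["license_fees"] + years * (costs["annual_support"] + costs["annual_training"])

print(f"Commercial TCO over {YEARS} years:  ${tco(commercial):,}")
print(f"Open source TCO over {YEARS} years: ${tco(open_source):,}")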

Customer: replied 4 years ago.

Open Source Licensing Schemes

Commercial software developers use licensing schemes that:

1. Exploit their intellectual property in ways that will generate enough revenue to pay their research, development, marketing, and support costs and leave something left over for profit (sometimes a small profit, and sometimes a monumental profit); and

2. Limit scope of use, limit transferability, prohibit reverse engineering, limit warranties, and limit liability.

With commercial software, users pay for whatever benefit they get and possibly for some benefit that they do not need. Still, the fundamental economics of commercial software licensing is that each party gets an anticipated benefit at an anticipated cost.

The same is true for open source software, but the licensing model is very different. Open source does not necessarily mean that one can do whatever one wants to do with the software, although that is sometimes the case. There are numerous different forms of open source software license, ranging from those that are highly permissive to some that are highly restrictive.7 For example, the BSD (an acronym for Berkeley Software Distribution) model is highly permissive and permits taking the software and doing pretty much whatever you wish with it, including modification and distribution of free or commercial derivatives, provided that each copy contains a specified form of attribution that includes a copyright notice and a disclaimer of warranties and liability. The UNIX operating system is an example of software that was distributed under the BSD license (among other licensing models), and Sun Microsystems' Solaris operating system is an example of a proprietary derivative of BSD UNIX.
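To make the attribution requirement concrete, a BSD-style header of the kind each copy must carry might look roughly like the following. The wording here is paraphrased and heavily abbreviated for illustration; in practice the actual BSD license text must be reproduced verbatim.

# Illustrative BSD-style attribution header (paraphrased, NOT the actual
# license text): a copyright notice plus a disclaimer of warranties and
# liability, retained in every copy.
#
# Copyright (c) 2008, Example Corp. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the above copyright notice
# and this disclaimer are retained in all copies.
#
# THIS SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND.
# IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DAMAGES
# ARISING FROM ITS USE.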

At the other end of the spectrum are licensing models that permit free use, modification, and redistribution of the software, but are highly restrictive. The most frequently encountered examples are the Free Software Foundation's General Public License (GPL) and Lesser General Public License (LGPL), which permit modification and distribution of free derivatives but which preclude the distribution of closed source derivatives. The Linux operating system is an example of an open source derivative of UNIX distributed under the GPL, and a number of popular software development tools such as the GNU C compiler are also distributed under the GPL. It is the GPL that has received the most attention and has caused the most sleepless nights among commercial software developers, investors, and those involved in mergers and acquisitions of software companies.

The Open Source Community

There are two primary factions within the open source community. The first is the "free software movement." The "free software movement" is a philosophical and social movement that aims to change the rights of software users. The principal architect of the "free software movement" is the Free Software Foundation (FSF), which was founded in the mid-1980s by computer scientist Richard Stallman and which proclaims its primary missions to be:

1. Promoting computer users' right to use, study, copy, modify, and redistribute computer programs;

2. Promoting the development and use of free software and free documentation;

3. Spreading awareness of the ethical and political issues of freedom in the use of software;

4. Developing new free software; and

5. "[M]aking that software into a coherent system that can eliminate the need to use proprietary software."8

In the words "free software," the FSF defines "free" by saying:

When we call software "free," we mean that it respects the users' essential freedoms: the freedom to run it, to study and change it, and to redistribute copies with or without changes. This is a matter of freedom, not price, so think of "free speech," not "free beer."9

Opposition to software patents is a particular theme of the FSF: "Software patents are a vicious and absurd system that puts all software developers in danger of being sued by companies they have never heard of, as well as by all the mega-corporations in the field," Stallman explained. "Large programs typically combine thousands of ideas, so it is no surprise if they implement ideas covered by hundreds of patents. Mega-corporations collect thousands of patents, and use those patents to bully smaller developers. Patents already obstruct free software development," he added.10

The second faction of the open source community is the "open source movement," which promotes the efficiency and better software development model of open source development, rather than moral or ethical ideals of freedom. It should be noted that the FSF opposes the term "open source" being applied to what it refers to as "free software." The FSF also dislikes the pragmatism of the "open source movement," as the FSF fears that its ideals of freedom and community are threatened by compromise. "For the Open Source movement, non-free software is a suboptimal solution. For the Free Software movement, non-free software is a social problem and free software is the solution."11

While both sides may agree on some practical aspects, the two sides represent very different schools of thought, and both are vehement in their rhetoric. Regarding the "open source movement," Stallman states that, while the movement is not the enemy, he does not want the Open Source movement's efficiency ideals to get credit for what he sees as progress driven by the FSF's ideals of software freedom. Linus Torvalds, the creator of Linux and a representative voice of the Open Source movement, believes that the FSF's focus on freedom is misplaced:

I think that "freedom" is fine, but we're not exactly talking about slavery here. Trying to make it look like we're the Abraham Lincoln of our generation just makes us look stupid and stuck up. I'd much rather talk about "fairness" and about issues like just being a much better process for generating better code, and having fun while doing so.12

History of the GPL

The GPL was born out of the FSF's free software initiative, the GNU project. The GNU project developed a free operating system that eventually has formed part of many current Linux distributions. The FSF describes the purpose of the project:

The GNU Project was conceived in 1983 as a way of bringing back the cooperative spirit that prevailed in the computing community in earlier days-to make cooperation possible once again by removing the obstacles to cooperation imposed by the owners of proprietary software.13

Richard Stallman developed the GPL license as a method of distribution of GNU software. In a paper entitled "The GNU Operating System and the Free Software Movement," originally published in the book "Open Sources," Stallman wrote regarding the philosophy behind the GPL license:

Shortly before beginning the GNU project, I heard about the Free University Compiler Kit, also known as VUCK. (The Dutch word for "free" is written with a V.) This was a compiler designed to handle multiple languages, including C and Pascal, and to support multiple target machines. I wrote to its author asking if GNU could use it.

He responded derisively, stating that the university was free but the compiler was not.

The goal of GNU was to give users freedom, not just to be popular. So we needed to use distribution terms that would prevent GNU software from being turned into proprietary software. The method we use is called "copyleft."

Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means of privatizing software, it becomes a means of keeping software free.

The central idea of copyleft is that we give everyone permission to run the program, copy the program, modify the program, and distribute modified versions-but not permission to add restrictions of their own. Thus, the crucial freedoms that define "free software" are guaranteed to everyone who has a copy; they become inalienable rights.

For an effective copyleft, modified versions must also be free. This ensures that work based on ours becomes available to our community if it is published. When programmers who have jobs as programmers volunteer to improve GNU software, it is copyleft that prevents their employers from saying, "You can't share those changes, because we are going to use them to make our proprietary version of the program."

The requirement that changes must be free is essential if we want to ensure freedom for every user of the program. The companies that privatized the X Window System usually made some changes to port it to their systems and hardware. These changes were small compared with the great extent of X, but they were not trivial. If making changes were an excuse to deny the users freedom, it would be easy for anyone to take advantage of the excuse.

A related issue concerns combining a free program with non-free code. Such a combination would inevitably be non-free; whichever freedoms are lacking for the non-free part would be lacking for the whole as well. To permit such combinations would open a hole big enough to sink a ship. Therefore, a crucial requirement for copyleft is to plug this hole: anything added to or combined with a copylefted program must be such that the larger combined version is also free and copylefted.

The specific implementation of copyleft that we use for most GNU software is the GNU General Public License, or GNU GPL for short. We have other kinds of copyleft that are used in specific circumstances. GNU manuals are copylefted also, but use a much simpler kind of copyleft, because the complexity of the GNU GPL is not necessary for manuals.14

Commercial Software Industry Concerns

The greatest concern for commercial software developers and distributors is the possibility that commercial software can become subject to the GNU license terms if it is deemed to be "based on," "added to," "combined with," "derived from," or a "modified version of" software distributed under one of the GNU licenses. The problem is that except in extreme cases, it is difficult to know with certainty exactly what these terms mean as used in the GNU licenses, and it is very easy for undisciplined programmers to tread the gray areas that could subject their employers' software to the "copyleft" scheme. The most feared consequence is that their software could thus become subject to conditions or obligations that it be distributed free of charge with broad permissions to modify and redistribute and a requirement that the source code be made freely available to all licensees.

Much of the software distributed under the GPL is useful for providing needed functionality in larger systems, and programmers working on closed source proprietary projects are sometimes tempted to simply patch them in, rather than to create equivalent functionality from scratch. In addition, many useful and popular programming and development tools are distributed under the GPL or LGPL, and some of those tools inject pieces of themselves into the software that they are used to produce. In addition, it has been suggested that merely writing software to work with a particular GPL component might be enough to render the new software subject to the GPL.15 While it is fairly clear in some cases that the use of GPL code to build a new product would render the resulting product subject to the GPL, there are many instances in which that is not at all clear one way or the other. The fear that proprietary software can become subject to these open source license terms by the inadvertent inclusion of a small piece of open source code has prompted some to refer to GPL/LGPL code as "viral."

The Release of GPLv3

The release of GPLv3 was obscured by the release of Apple's iPhone (which occurred on the same day), but is a significant landmark in the software industry for a couple of reasons. First, the GPL license covers a solid majority of open source software. According to one source, around two-thirds of all open source projects use the GPL.16 The FSF estimates the usage of GPL at nearly three-fourths of all open source projects. Second, GPLv3 is the first new version of the GPL since the release of GPLv2 16 years ago. This section deals with the changes in version 3, the technical community's reaction to the release, and license compatibility.

Changes in GPL Version 3

According to the authors of GPLv3, the new version was necessary to deal with some of the "new developments" facing the open source community, such as software patents, "tivoization," and "Treacherous Computing."17 The changes came about through months of public comment and committee work moderated by the FSF. Among the differences from GPLv2, five are perhaps the most significant:

1. An explicit patent grant by the distributor of the software to limit the effect of software patents on the free distribution and use of the software.

2. Clauses created in response to cross-licensing agreements that extend patent authorizations to all end-users beyond those specified in the agreement. The license grandfathers in the Microsoft/Novell agreement, but otherwise blocks future agreements of that nature.

3. Anti-"tivoization" measures to ensure that the "owner of a device using GPL software can change the software."18 This measure is intended to prevent companies from using open source software in a device that will not run the software if it is altered.

4. GPLv2 was somewhat ambiguous with respect to whether a "modified" version of the subject software was the same as a derivative work under copyright law. GPLv3 clarifies the "derivative work" issue by defining a "modified version" as a derivative work under copyright in accordance with "local law."

5. GPLv3 contains a broad definition of "corresponding source" that includes all code needed to generate, install, run, and modify, including certain shared libraries and dynamically linked subprograms.

Reaction to GPLv3

Customer: replied 4 years ago.

The reaction of the technical community to the release of GPLv3 has been mixed. Outside of the FSF itself, which immediately brought 15 GNU programs under the GPLv3 as of its release date, some projects and corporations have shown support for the new version of the license. The Samba project, an open source project that is home to a popular file and print server program, has announced that it will adopt the new license. IBM has indicated that it will also begin to produce projects using GPLv3:

GPL 3 code will be flowing from IBM . . . We'll tell our customers we're fine with it . . . As with any consensus process, you don't get everything you asked for. But we got listened to. What came out is absolutely a commercially viable license. (Dan Frye, vice president of IBM open systems development.)19

Other members of the technical communities have expressed reservations. Linus Torvalds, the creator of Linux, has expressed doubt about whether key Linux components would migrate to version 3 from version 2. Torvalds most strongly disagrees with the anti-"tivoization" provision:

[GPLv3] basically says, "We don't want access just to your software modifications. We want access to your hardware, too." . . . I don't think it's my place as a software developer to judge how hardware works around it.20

Torvalds may be willing to compromise, however, if Sun releases Solaris (which, like Linux, is an open source operating system based on Unix) under GPLv3:

I don't think the GPLv3 is as good a license as (GPL) 2, but on the other hand, I'm pragmatic, and if we can avoid having two kernels with two different licenses and the friction that causes, I at least see the reason for GPLv3.21

Sun Microsystems has been supportive in the media of the GPLv3 release. Sun has released software under GPLv2, but it has not yet indicated that any projects will be moved to version 3. A recent article quoted a Sun executive:

Sun, which selected GPLv2 to govern Java and, more unusually, the UltraSparc T1 processor design, is still evaluating the license . . . [GPLv3 is] a strong and market-changing document.22

MySQL, the developer of a popular open source database, is taking a wait-and-see approach to gauge whether others in the market will adopt the license.

We're happy about many changes in [the GPL] text . . . . What still remains to be seen is the adoption. GPLv3 is still something people are asking questions about. Our logic is that we don't want to be those that answer those very first questions.23

GPLv3 License Compatibility

Like the GPLv2, the GPLv3 is not compatible with many other open source licenses. This means that a component licensed under GPLv3 cannot coexist within the same program with a component covered under an incompatible license. Notably, the list of open source licenses that are incompatible with GPLv3 includes GPLv2. Richard Stallman addressed this theme:

GPLv2 will continue to be a valid license, although GPLv2 and GPLv3 remain incompatible . . . .This is because both GPLv2 and GPLv3 are copyleft licenses: each of them says: "If you include code under this license in a larger program, the larger program must be under this license too."24

Thus, if one component of a program is licensed under GPLv3, the whole program must be licensed under GPLv3. What constitutes a "program" for this purpose is not at all clear. An operating system, for example, comprises different components: the kernel (the central component of the operating system), applications, system utilities, and libraries. Must each of these components carry the same license?

One immediate concern is that the central component of the Linux operating system, the kernel, is licensed under GPLv2, while other components of Linux are being released under GPLv3. As stated above, Linus Torvalds believes that GPLv2 is a better license than GPLv3 and thus far has indicated that GPLv2 will continue to be used for the kernel.

The FSF responded to the incompatibility issue within Linux by expressing the view that many components of the operating system are separate programs that can coexist under different licenses.25

If the two programs remain well separated, like the compiler and the kernel, or like an editor and a shell, then you can treat them as two separate programs-but you have to do it properly. The issue is simply one of form: how you describe what you are doing.26

The Linux license compatibility question may be avoided altogether in the future if the Linux kernel migrates to GPLv3. As stated above, Mr. Torvalds has indicated that if Sun releases Solaris under GPLv3, then for pragmatic reasons he might support the adoption of GPLv3 for the Linux kernel as well.

Management of Open Source Software in the Development Process

A software developer must be careful to ensure the intellectual property integrity of its products. First, developers must show respect for the intellectual property rights of others and ensure that they do not infringe the copyright rights of open source authors by exceeding the licenses or other authorizations that accompany most open source tools and components. Second, developers need to protect their own intellectual property and the value of their own assets, operations, and business plans against the unwelcome consequences that can follow from the use of certain open source components in their systems and products.

Because of the legal and licensing issues presented by open source software, commercial developers must implement appropriate policies and practices concerning the use of open source tools and components by their engineers, must monitor and audit such things, and must thoughtfully decide how to license their own products they are releasing.

Tracking Open Source Software Usage in the Software Development Process

The software development process is often a large and complex undertaking requiring a great deal of coordination. One of the areas that management must coordinate is the use of open source and other third-party software. This is especially true when individual developers can integrate open source components into the software that they create. Often this takes place for temporary or testing purposes with the sincere (but sometimes later forgotten) intention of replacing open source components prior to commercial release.

Fortunately, there now are automated tools that can track usage of open source software in the development process. Examples of such software tools are Black Duck's protexIP and transactIP and Palamida's IP Management and IP Amplifier. During the software development process, protexIP and IP Management can be used to proactively track and manage usage of open source or other third-party software. In a code review or due diligence situation, transactIP and IP Amplifier can be used to scan existing code for usage of open source or other third-party software. Both companies also make plug-ins for Integrated Development Environments (IDEs), or programs that can run within programming software, to track usage of open source components virtually in real time.

These tools rely on proprietary databases of open source code, "signatures," and "fingerprints" to identify suspected open source components. They can also scan a code base for prescribed character strings to identify content for which there may be no code, signature, or fingerprint in their libraries. These tools do not entirely replace the manual tracking or code review processes, but they can save huge amounts of time and expense by helping to identify specific content that warrants manual study from within code bases that often contain many terabytes of material.
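As a toy illustration of the character-string scanning idea (and emphatically not how protexIP or IP Amplifier work internally; those rely on proprietary signature databases), a sketch like the following flags files containing phrases characteristic of common licenses. The marker strings and the "src" directory are assumptions.

# Toy license-string scanner: walks a source tree and flags files that
# contain phrases characteristic of common open source licenses.

import os

LICENSE_MARKERS = {
    "GPL": ["GNU General Public License", "GNU GPL"],
    "LGPL": ["Lesser General Public License"],
    "BSD": ["Redistribution and use in source and binary forms"],
}

def scan_tree(root):
    """Report files under root containing any license marker string."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as f:
                    text = f.read()
            except OSError:
                continue
            for license_name, markers in LICENSE_MARKERS.items():
                if any(m in text for m in markers):
                    print(f"{path}: possible {license_name} content")

scan_tree("src")  # hypothetical source directory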

With respect to GPLv3 and LGPLv3, Palamida also maintains an online database that tracks open source projects and contains information on the types of licenses used by each project. If a company's developers are using or considering use of tools or other materials from an open source project, it can look to the Palamida database for an initial sense of what the legal implications of such use might be.27

Remediation of Unintended or Unauthorized Use of Open Source Software

If a software developer discovers that GPLv3 licensed components are being used inadvertently or without authorization, remediation steps should be taken before deployment or conveyance of its software. The timing and nature of remediation will generally depend on what is uncovered and the client's business priorities, for example, the company's business plan for the software. Depending on the nature and extent of the particular issue (s) to be addressed, remediation can take days or months.

Remediation may simply be a matter of taking steps to comply with the requirements of an attribution license or the purchase of a commercial license from an open source author. On the other hand, it could require the development of replacements for open source "copyleft" code by one or more suitably qualified teams of independent developers operating under a formal independent development protocol. Guiding a company through a code remediation process usually requires integrated legal and technical analysis and advice.

Open Solaris and Java

As noted earlier, Sun Microsystems has not yet committed any of its software to GPLv3. It will be interesting to watch Sun's reaction to GPLv3.

Sun has released two of its major software products, Solaris and Java, under open source licenses. Sun released Solaris under the Community Development and Distribution License (CDDL) and Java under GPLv2. Sun CEO Jonathan Schwartz has indicated that Sun is considering moving both projects toward GPLv3 to gain favor with the open source community. "Will we GPL Solaris? We want to ensure we can interact with the GPL community and the Mozilla community and the BSD community," Schwartz said, referring to three major open source licenses. "I don't think we've been as effective as I'd like to be in going after the GPL community, because there's an awful lot of really bright people who think that's the license they prefer. That discussion is incredibly central to recruiting more developers around the world." Solaris would likely hold a dual-license, which is possible since Sun holds the copyright to all of the Solaris code.28

Regarding Java, Schwartz said in an interview, "We did version 2 with Java because version 3 wasn't out. When we have version 3, Java will likely go to 3."29

Some in the open source community are skeptical of Sun's intentions. Linus Torvalds for example, has made the "cynical prediction" that Sun "may be talking a lot more than they are or ever will be doing."30 Torvalds has cited Sun's reluctance to release any software that would benefit Linux, as Linux directly competes with Solaris. Torvalds also doubts that Sun would release the ZFS file system (which he refers to as "one of their very very few bright spots") under GPLv3. Torvalds asserts that Sun wants to benefit from having good relations with the open source community, while at the same time not participating fully in the community by contributing some of its more important software components.31

The Apache Software Foundation also has criticized Sun regarding the open source release of Java, pointing to an important component of Java, the Java Certification Kit (JCK), which has been left out of the release. Apache is the developer of Harmony, an open source version of Java, but is unable to certify that it runs correctly according to the Java standard without entering restrictive agreements with Sun that would negate the open source benefits of Harmony. Geir Magnusson, Jr., an Apache officer and vice president of the Java Community Process at Apache, was quoted regarding this issue:

The license that Sun is offering us would put restrictions on how our users could use our independent implementation of Java. For example, if users wanted to use Harmony to power an information kiosk at an airport or use it in an X ray machine alongside Linux, they could not. Sun considers that a use case that would be forbidden under the license.32

If the JCK were released by Sun without such restrictions, then Harmony potentially could be Java-certified and still keep its open source status.

Sun has cited "critically important" compatibility concerns, such as forking (where the development of Java would branch off into separate paths), as the rationale for not releasing the JCK on less restrictive terms.33 Sun has hinted that more availability, such as free usage for non-profit foundations, may come in the future.

"Cross-Licensing" Agreements between Microsoft and Open Source Firms

Expert:  Chris Parker replied 4 years ago.
Thanks. I will post the answer tomorrow.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the third question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Thanks, XXXXX XXXXX a lot of work coming your way. I hope you can help me out.
Expert:  Chris Parker replied 4 years ago.
You are welcome. I'll look forward to the new questions.

-Chris
Customer: replied 4 years ago.

Service Request SR-kf-013

 

  • 5-6 page section of the paper. This should include:

o Testing Process Summary: Define a test plan or script that identifies major software functionality and hardware to be tested along with the required outcomes.

o Installation Process and Training Plan Summary: Provide a time line that identifies the specific steps - including training - and related resources required to implement the recommended system. Include a narrative explanation that includes a discussion on the effects of project constraints, such as time, conversion method, etc., and a description of the recommended training plan.

o Documentation Plan Summary: Specify and explain each type of documentation required for ongoing support (technical and user) of the proposed system.

o Support and Maintenance Plan Summary: Provide a plan that outlines responsibilities and related resources necessary to support and maintain the proposed system: software, hardware, and networks.

Include citations and references using APA format.

o Due on Thursday please.

 

I would pay $50 for this work.

Customer: replied 4 years ago.
Friday would be fine.
Expert:  Chris Parker replied 4 years ago.
Considering my current work commitments, I am afraid I'll be unable to take up a long paper such as this at this point in time.

If you have any short-answer questions like before, I'll be happy to help you out.

Regards,
Chris

Edited by XXXXX XXXXX on 10/20/2010 at 5:26 PM EST
Customer: replied 4 years ago.

1. When does it make sense to outsource, specifically the maintenance and support of an application system? 200-300 words (due Thursday please)

 

2. According to Dibenedetto's (2007) article, what are the advantages of on-demand software? Why is maintenance considered an advantage? 200-300 words (due Friday please)

Expert:  Chris Parker replied 4 years ago.
Thank you. I will complete them before their respective deadlines.

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the first question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Expert:  Chris Parker replied 4 years ago.
---

Edited by XXXXX XXXXX on 10/23/2010 at 12:44 AM EST
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the second question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

What internal costs would a company have to incur in order to keep up with changes or upgrades to SaaS software it may be using?

Two examples, please. Due Monday. Thanks.

Expert:  Chris Parker replied 4 years ago.
Thanks for the new question. I will take care of it.

Regards,
Chris
Customer: replied 4 years ago.

 

At what time would the work be ready, please? I do apologize for the late notice.

Expert:  Chris Parker replied 4 years ago.
I can post the work by 5 PM EST. Hope that works for you.

Regards,
Chris
Customer: replied 4 years ago.
ok that would be fine.
Expert:  Chris Parker replied 4 years ago.

Download my response to the second question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Thanks
Expert:  Chris Parker replied 4 years ago.
You are welcome.

-Chris
Customer: replied 4 years ago.

In "Microsoft Begins Its Radical Shift to Software as a Service" (Orr, 2008), the author discusses the new software delivery model of Microsoft®. How will this model affect the way software is designed, built, and maintained? What special end-user considerations need to be considered?

Responses should be at least 200-300 words, and should consist of at least two full paragraphs with a single line between each paragraph. (Due Wednesday night, please.)

 

 

 

Orr, B. (2007, December). Microsoft begins its radical shift to software as a service. American Bankers Association. ABA Banking Journal, 99(12), 46.

Customer: replied 4 years ago.
Expert:  Chris Parker replied 4 years ago.
Hi!

The link to the article is not working. Could you please fix it?

Regards,
Chris
Customer: replied 4 years ago.
Customer: replied 4 years ago.
OK, I'll resend later.
Customer: replied 4 years ago.

orr

Expert:  Chris Parker replied 4 years ago.
Got it! Thanks.

-Chris
Customer: replied 4 years ago.
What time would the work be ready, please?
Expert:  Chris Parker replied 4 years ago.

Download my response from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Based on Mitchell's (2008) article, discuss something that a company's application development team would need to consider if the company decided to introduce Macintosh® into the company's computing environment.

Responses should be at least 200-300 words, and should consist of at least two full paragraphs with a single line between each paragraph. (Due Friday.)





Mitchell, R. L. (2008, February 25). Macintosh insurrection. Computerworld, 42(9), 28.
Customer: replied 4 years ago.
Customer: replied 4 years ago.
Is the work ready, please?
Expert:  Chris Parker replied 4 years ago.
The link to the article you posted is not working. Could you please fix it?

Regards,
Chris

Edited by XXXXX XXXXX on 10/29/2010 at 6:06 PM EST
Customer: replied 4 years ago.
Robert L. Mitchell. Computerworld. Framingham: Feb 25, 2008. Vol. 42, Iss. 9; pg. 28, 4 pgs

Abstract (Summary)

The "consumerization of IT" is leading Apple Inc into the enterprise, albeit through the back door, says Gartner Inc analyst Charles Smulders. The resurgence of interest in the Mac is a direct result of the evolution of increasingly Windows-friendly, Intel x86-based Macs and the introduction of Boot Camp, which allows a full Windows environment and its complement of applications to run natively in a separate hard-drive partition on any Mac. Eventually, as the corporate PC environment becomes fully virtualized, employers won't worry about the underlying hardware and operating system. Despite the Mac's promise, it still falls short for broad enterprise adoption today. When deploying Macs at scale, IT can't afford to be held hostage to a single vendor's supply chain problems. Smulders cautions that problems yet to be addressed include lagging support from middleware and enterprise software vendors, the complexities of adding another client hardware and software platform to the mix, and the lack of a second source for system hardware and parts.

Full Text

Copyright Computerworld, Inc. Feb 25, 2008

Why it could happen in the enterprise. And why it probably won't. BY ROBERT L. MITCHELL

GUIDO SACCHI, CIO and senior vice president of corporate strategies at CompuCredit Corp., decided to go with the flow. He's allowing Macintoshes into the business when the requestor makes a valid business case. "If they think they can get better productivity on a Mac, so be it. Who am I to stop them?" he says.

Sacchi's attitude is a tacit acknowledgment that innovative technologies and those offering "superior user experience" are evolving in the home market, not the business arena. "The winning strategy is about providing tools to the users that pretty much resemble what they're doing at home," he says.

This "consumerization of IT" is leading Apple Inc. into the enterprise, albeit through the back door, says Gartner Inc. analyst Charles Smulders.

But might this also signal the stirrings of a bigger change - a Mac insurrection at the enterprise level?

If there are such stirrings, they're tentative, and Apple doesn't seem to be doing much to rally the troops. "We haven't seen a pledge by Apple to increase the level of support to the enterprise," says Smulders. "They continue to say that's not a market that they're focused on."

That didn't stop Dale Frantz, CIO at Auto Warehousing Co., which began migrating to Macs across 23 locations enterprisewide last year. Even so, Apple's lack of corporate focus concerns him. "The biggest weakness at this point I'd say is the lack of a cohesive enterprise strategy on the part of Apple," he says.

Apple itself appears confused. Asked to discuss its enterprise strategy with Computerworld, the company vacillated for several months but finally declined. According to a spokesman, the company does support corporate customers, but he declined to elaborate on Apple's enterprise strategy.

Apple may also need to keep its resources focused on those core areas - the consumer, education, creative, IT, science and small business markets - where it's seeing rapid growth. The company's strategy is simple, says Charles Edge, director of technology at 318 Inc., an IT consultancy: "Make a great computer that's standards-compliant. If enterprises want to use it, great, but if they don't, that's fine too."

It takes more than a great product to succeed as the primary personal computing platform in large businesses. "To go after the major corporate accounts, you need a savvy direct sales force [and] a dedicated service organization to take care of enterprise accounts. That's not Apple's heritage," says Tim Bajarin, president of consulting firm Creative Strategies Inc. Even so, he says, "I'm getting more and more questions about bringing Macs into the enterprise and what it would take."

Customer: replied 4 years ago.

Smulders also reports a rise in inquiries from enterprise customers. The increased interest is being driven by changes in what the Mac has to offer; by Apple's success in the consumer, small business and IT professional markets and other niches; and by broader trends in the enterprise, where Windows' grip on the desktop may be starting to loosen just a bit.

RETHINKING THE MAC

The Mac attraction is easy to understand. On the client side, Mac OS X is relatively easy to use. The addition of new features in the latest release, Leopard, only serves to burnish that reputation. Macs are considered more stable than Windows PCs, which means fewer help desk calls, and the machines currently present fewer security problems.

But that's not what has IT's attention.

The resurgence of interest in the Mac is a direct result of the evolution of increasingly Windows-friendly, Intel x86-based Macs and the introduction of Boot Camp, which allows a full Windows environment and its complement of applications to run natively in a separate hard-drive partition on any Mac. If Apple's earlier move to Intel-based hardware had IT management rethinking the Mac's role, the full integration of Boot Camp into the Leopard release of OS X has some openly talking about it. "It changed the game," says Doug Standley, a consultant in the technology innovation strategies group at Deloitte Consulting LLP.

Geiger Brothers Inc. already has 25 Mac users in its marketing group, but Mac use could expand in the future, says Joe Marshall, business analyst at the promotional products company. A few Macs use Parallels Inc.'s virtualization software to allow access to Windows business applications, but most of Geiger's 300 PCs remain on Windows.

Boot Camp is faster than software emulation packages such as Parallels, since Windows runs directly on the hardware - and it's free. Its integration into Leopard, Marshall says, may have leveled the playing field at Geiger and other companies. "There's a potential for Apple to make very large gains into the PC environment, and not just for graphic arts," he says.

On the server side, the constellation of Apple products - Xserve, Leopard Server and Xsan - is intended to serve the small-business and departmental islands of Macs in Apple's core markets. But Apple has also beefed up some features that are important to enterprise users. Integration problems with Microsoft's Active Directory have been resolved. Users can update their directory profiles, and digital signing is supported. The fact that OS X is based on the open Unix operating system and open standards such as Samba, NFS, RADIUS and LDAP also makes life easier for administrators.
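
To make the open-standards point concrete, here is a minimal sketch, using Java's standard JNDI API, of a directory query that would work against any LDAP-compliant server, OS X Server included. The host name, base DN, and account name are hypothetical placeholders, not details from the article.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.InitialDirContext;
    import javax.naming.directory.SearchControls;
    import javax.naming.directory.SearchResult;

    public class LdapLookup {
        public static void main(String[] args) throws Exception {
            // Connection settings; the host and base DN are made up for the example.
            Hashtable<String, String> env = new Hashtable<String, String>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
            env.put(Context.PROVIDER_URL, "ldap://directory.example.com:389");
            InitialDirContext ctx = new InitialDirContext(env);

            // Search the whole subtree for a (hypothetical) user account.
            SearchControls controls = new SearchControls();
            controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
            NamingEnumeration<SearchResult> results =
                ctx.search("dc=example,dc=com", "(uid=jsmith)", controls);

            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
            ctx.close();
        }
    }

Because the code targets the LDAP standard rather than a vendor API, pointing it at a different directory server is mostly a matter of changing the URL, which is precisely the administrative convenience the article describes.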

With these changes, says Edge, Apple is "pushing toward bigger environments."

LICENSE TO SAVE

On the server side, Apple appears to have a licensing cost advantage. Its software licensing model was a primary reason why Frantz decided to standardize on Mac servers. Apple licenses Leopard Server on a per-server basis - no client access licenses are required to access file-sharing, e-mail, chat, shared calendars and other basic features.
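
To see why the per-server model can matter, consider a rough worked comparison; every figure below is hypothetical and chosen only to show how per-client access licenses scale with head count, not to quote actual prices.

    public class LicenseCostCompare {
        public static void main(String[] args) {
            // All figures are hypothetical, for illustration only.
            int users = 200;
            double perServerLicense = 999.00;      // flat fee, unlimited clients
            double calServerLicense = 699.00;      // server license, plus...
            double clientAccessLicense = 30.00;    // ...one CAL per user

            double flatModel = perServerLicense;
            double calModel = calServerLicense + users * clientAccessLicense;

            System.out.printf("Per-server model: $%.2f%n", flatModel); // $999.00
            System.out.printf("CAL model:        $%.2f%n", calModel);  // $6699.00
        }
    }

Under these invented numbers, the flat per-server license wins by a wide margin once the user count grows, which is the shape of the argument behind Frantz's decision.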

But Apple has little momentum in larger organizations. For example, the MIT campus has about 3,000 Macs but just a few isolated Apple servers. It mostly uses Dell hardware running Windows or Linux. "I don't see [Apple] taking over the data center anytime soon," says Don Montabana, MIT's director of client support services. "You go with what works."

But Apple's success in the home and education markets has led to burgeoning grass-roots demand for Macs in many organizations, since more and more recent college graduates have Mac backgrounds these days. At Georgetown University Law Center, nearly 50% of the students are using Macs, up from less than 1% a few years ago, says CIO Pablo Molina. The same phenomenon is occurring at MIT, where 30% of all computers on campus are Macs, up from 20% last year. "This incredible rise in the use of Macs is going to put pressure on IT departments to support Macintosh PCs," Molina predicts.

Bajarin and Edge say their enterprise clients report that some new hires are lobbying for Macs. "The younger kids who grew up on Macs are frustrated with the tools they're being given," Bajarin says.

"It's a battle between corporate and the end users as to what is deployed," Smulders says. But ultimately, the choice of personal computer is not a popularity contest. "I don't believe we've gotten to the point where users are deciding," he says.

According to Standley, legacy integration and the associated conversion costs are the primary factors keeping Macs out of the enterprise. But those issues may be fading. As the adoption of Web technologies and virtualization increases, the PC hardware and operating system are increasingly being abstracted away from existing enterprise applications, which have traditionally been closely aligned with Microsoft Windows. That has created a small opening for alternative platforms such as the Mac.

Some programs are being rewritten as Web-based applications; others have been moved to virtual environments such as Citrix Presentation Server. The latter execute the user's applications on back-end servers and require only a browser plug-in on the client for full access. Geiger Brothers' IT staff recently rewrote a shipping application to support a Web front end - the company's new standard. "Anything new is being coded to a browser as opposed to [Windows], for cross-platform compatibility," says Marshall.

Eventually, as the corporate PC environment becomes fully virtualized, employers won't worry about the underlying hardware and operating system. But, says Smulders, "we're still a few years away from that."

BACK TO REALITY

Despite the Mac's promise, it still falls short for broad enterprise adoption today. For Sacchi, supportability and total cost of ownership are deal-killers. "Can Apple make the case for themselves, understand all of the CIO

Customer: replied 4 years ago.
issues and help me solve them?" For now, he says, the answer is no.

Usually, Macs are more expensive when the purchase price and cost of support are factored in, Sacchi says. So although he's allowing Macs in, he hasn't changed his plans. "Because of the higher costs in an enterprise-level deployment, you have to have a justification in productivity. Right now, I see that only in specific niches," he says.

Smulders cautions that problems yet to be addressed include lagging support from middleware and enterprise software vendors, the complexities of adding another client hardware and software platform to the mix, and the lack of a second source for system hardware and parts.

MIT's Montabana confirms the first point. "The piece that's left is to get all of the ERP packages compatible with the Mac," he says. "For Oracle, SAP and [other enterprise software], the Mac clients always lag behind."

Configuring Macs to support Windows also adds complexity to the environment, with two operating systems and possibly emulation software to support. Boot Camp and virtualization software are a good interim solution for small groups of Mac users that need access to a few Windows applications, but Molina doesn't see that as a long-term strategy for larger populations.

Edge recommends using Citrix Presentation Server, rather than relying on Boot Camp or emulation software such as Parallels or VMware Fusion. "It's a lot cheaper to buy an Active Directory license and a Citrix license than to buy a copy of Parallels and XP or Vista and a copy of the application," he says.

Companies with enterprise licensing agreements, however, don't have to worry about extra Windows licenses because they've already paid for them, says Marshall. But Parallels does represent an incremental licensing expense; it costs $80 per Mac before volume discounts.

Still, that's not Molina's point. "It's not the cost but the complexity of maintaining all of those environments. I don't see that as a viable mainstream option. You either stay in Windows or you switch to Macs," he says.

Another concern is that Apple has sometimes had trouble meeting demand for equipment and parts. And its forays into licensing its hardware to third parties - first with the Mac and more recently with its iPod - have not fared well.

Sacchi says finding an alternative source for parts is not a big deal for one department with a few Macs. "But if somebody is thinking about a complete enterprise replacement, that would be a concern," he adds.

When deploying Macs at scale, IT can't afford to be held hostage to a single vendor's supply chain problems. "Compared to where they were five years ago, [Apple's] supply chain and manufacturing is much tighter," Bajarin says. But MIT is experiencing problems right now. "Getting parts from Apple can be a very, very difficult process. It can take weeks," Montabana says. In contrast, his PC vendors deliver parts the next business day.

Service and support are also hurdles. "You're transferring to a platform from a vendor that's not committed to supporting large enterprise needs. From what we've seen, the tools available and the support are not enterprise-class," Smulders says.

"In my mind, the service level has dropped from what it used to be," says Jim Quinlan, president of sporting goods retailer Jax Inc. in Fort Collins, Colo., which runs its business on Mac hardware and software. With no local Apple reseller, Jax must ship equipment back to Apple for service. If he can't wait, he must travel 70 miles to the nearest Apple store.

Edge points out that Apple offers enhanced support for larger customers, but the $50,000 price tag is high.

Quinlan doesn't plan to abandon the Mac. He says he has had no virus problems, the intuitive interface creates fewer support issues, and the hardware has been reliable. But most large businesses will likely remain insurrection-free for the foreseeable future. "I don't think you'll see a significant penetration into the enterprise until Apple makes the strategic decision to go after that," says Bajarin.

On the other hand, if Apple continues to see more interest at the IT level, he says, "they'll adjust."

Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response from the following link: Click.

Please review and accept.

Regards,
Chris

Edited by XXXXX XXXXX on 10/30/2010 at 7:39 AM EST
Customer: replied 4 years ago.

Based on the article, "Changing the Corporate IT Development Model: Tapping the Power of Grassroots Computing" (Cherbakov, Bravery, Goodman, Pandya, Baggett, 2007), discuss how grassroots computing changes the way software is designed, developed, tested, and maintained in a typical organization. Responses should be at least 200-300 words, and should consist of at least two full paragraphs with a single line between each paragraph. (Due wednesday night please.)

Cherbakov, L., Bravery, A., Goodman, B. D., Pandya, A., & Baggett, J. (2007, October-December). Changing the corporate IT development model: Tapping the power of grassroots computing. IBM Systems Journal, 46(4), 1.

Customer: replied 4 years ago.

The recent rise of grassroots computing among both professional programmers and knowledge workers highlights an alternative approach to software development in the enterprise: Situational applications are created rapidly by teams or individuals who best understand the business need, but without the overhead and formality of traditional information technology (IT) methods. Corporate IT will be increasingly challenged to facilitate the development, integration, and management of both situational and enterprise applications. In this paper, we describe the emerging prevalence of situational application development and the changing role of IT. We also describe the experience at IBM in building, deploying, and managing the IBM Situational Applications Environment that enables employees to take responsibility for some of their own solutions. Finally, we discuss ways in which the situational application development paradigm may evolve in coming years to benefit enterprises, the demands that it will put on IT departments, and possible ways to address these challenges.

INTRODUCTION

The corporate information technology (IT) approach to solution development has been dominated by concerns for performance, availability, and security. Budget realities have limited corporate-sponsored projects to those with the highest impact, leaving many needs unfilled. Furthermore, many commercial software applications and homegrown IT solutions ". . . tend to be badly designed, badly made, incomprehensible and obsolete . . ." 1 Long development cycles often result in applications that are unable to support evolved business needs. End-user efforts to address these gaps outside of the realm of corporate IT have been viewed, at best, XXXXX XXXXX

The recent rise of Web-based ad hoc computing among both professional programmers and business professionals brings into the spotlight a software-development approach that diverges from traditional IT methods: Teams or individuals who best understand their business problem rapidly create informal solutions to solve it. Not burdened by the overhead and formality of traditional IT methods, these casual developers focus on fast, good-enough results that can be refined later, if needed. Applications developed in this manner may not be ideal. They may be slow or deliver only a subset of possible functions; yet, they provide immediate relief for a given situation. These situational applications (applications written to address particular situations at hand) are often short-lived or perpetually improved. This development approach-in combination with increased software-oriented thinking, the growing popularity of server-side scripting, and new Web technologies such as AJAX (Asynchronous JavaScript** and XML)-is forcing a reevaluation of corporate enterprise software-development models.

This new breed of applications, often developed by nonprofessional programmers in an iterative and collaborative way, shortens the traditional development process of edit, compile, test, and run. Situational applications are seldom developed from scratch; rather, they are assembled from existing building blocks (or consumables, as they are referred to here). They are often used by a relatively small number of users (less than 50, according to a 2005 IBM-sponsored market research study on the growth of situational applications and the new market for ad hoc development). Developers expect improved productivity and functionality from their situational applications, and they expect to greatly shorten the time from the identification of a need to using a productive application that fills it. These solutions can potentially solve immediate business challenges in a cost-effective way, capture a part of IT that directly impacts knowledge workers, 2 and address areas that were previously unaffordable or of low priority to the IT department. Application builders also report higher satisfaction with their jobs and a sense of being in control. The previously mentioned IBM-sponsored market research shows that users of situational applications feel that they are of core importance. More than half of situational-application users view them as mission critical and rate them as very important to the success of their everyday activities, their department, and their company. Moreover, this view is shared by the corporate hierarchy all the way up to company executives.

The way workers view their workplace is changing, especially as the new generation, millennials, 3 are starting to join the workforce. These new employees have different expectations, skills, and values. 3-5 After all, they are the first generation to grow up with IT as an inseparable part of their environment. Because they are used to customizing and individualizing everything-from phone ring tones to their Facebook** 5 spaces-when they move into a workspace, they translate these experiences into wanting to select their own tools, customize their environment, and take responsibility for automating many necessary activities.

By contrast, IT department managers-who have justifiable concerns with reliability and availability of corporate systems, data privacy, and security and who are faced with decreasing budgets-often tend to be conservative in their adoption of new technologies and agile development methods. As a result, corporate IT is often seen as unable to support the business and can be perceived as a hindrance to rather than an enabler of innovation. 6 During the last 30 years, while languages, platforms, and tools have changed significantly, IT solution-development processes have changed very little.

Understanding and taking advantage of the latest changes in Web computing has the potential of significantly improving the effectiveness of corporate computing. These changes include shifts in both technology and usage patterns, collectively referred to as Web 2.0, a term coined by Tim O'Reilly. 7 As we will show, using Web development in enterprise computing has the potential to fundamentally transform the role of the IT department from solution developer to solution enabler, 8 a change that corporate IT must make to remain relevant.

The remainder of this paper is organized as follows. In the section "Emergence of situational applications," we provide the context to recent changes that signify a renewed approach to application development. In the section "IBM Situational Applications Environment," we describe our experience building an environment to support a situational-application-based approach (sometimes referred to as community-based computing), the challenges that we faced, and the issues that we addressed during its construction. In the section "Changing role of corporate IT," we examine changes already taking place in the enterprise and others that are likely to happen. We conclude the paper with a brief summary.

EMERGENCE OF SITUATIONAL APPLICATIONS

Evidence of end-user computing (including performing software engineering and development) goes back as early as the late 1970s, 9 with advances in the last 10 years making it increasingly easier for users to develop their own solutions. IBM-sponsored ad hoc development market research, described later in this section, and that of others 10 has revealed that development of applications by amateur programmers (i.e., employees who are not paid to program) is widespread. In 2006, approximately 12 million professionals identified themselves as programming in the workplace. Contrast that with the fact that there are only an estimated three million professional programmers (i.e., employees who program for a living). 10

Both professional and casual programmers are engaged in some ad hoc application development characterized by the lack of formal engagement around a solution. They disregard formal requirements gathering, architectural documents, and design specifications; instead, they focus on addressing immediate needs in the fastest possible way. IBM research shows that between 42 and 68 percent of IT employees and 12 percent of business employees have automated a business function, process, or activity in their department outside of a formal IT development project.

IBM research reveals that ad hoc application development activity tends to extend throughout the company (Figure 1). Although spreadsheet-based applications remain a prevailing choice for ad hoc programming, our research shows that Web development is rapidly gaining popularity. In the remainder of this paper, we focus on the subset of ad hoc applications developed using Web technologies and refer to them as situational applications.

Paradigm changes in Web development

Typical Web development requires a variety of skills at several layers, from the browser at the front end, to specialized middleware (e.g., IBM WebSphere* Application Server or IBM WebSphere Portal), to back-end database systems where the application-programming skills required are beyond the ability of a nonprogrammer. Languages such as Hop 11 and environments like Marmite 12 and IBM Sash Weblications 13 were introduced to simplify Web development. Continued work on rich client technologies demonstrates that the principles of lightweight development based on simpler Web technologies and skills have a value beyond simply easing application building. 14 Interactions with the Web platform are increasingly more compelling than even rich desktop applications. Web workflows can be collapsed into a single-screen experience, thereby gaining the benefit of desktop applications but with the simplicity and flexibility offered by Web development.

Customer: replied 4 years ago.

AJAX provides easy access to Web-based data and rich user-interface controls. The combination of AJAX and the REST (Representational State Transfer) architectural style of Web services offers an accessible palette for assembling highly interactive browser-based applications. The Uniform Resource Identifiers (URIs) used by REST to identify Web resources allow equal accessibility to those resources from browsers, mobile devices, and server applications, and they link from e-mails and bookmarks, making this programming style very appealing to a wide range of Web developers.
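
As a minimal sketch of this style from the server side, the Java fragment below performs an HTTP GET against a REST-style URI. The service URL is a made-up placeholder; the point is that the very same URI could just as easily be fetched by a browser via AJAX, a mobile device, or another server.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class RestGet {
        public static void main(String[] args) throws Exception {
            // The resource URI is hypothetical; any REST-style service looks the same.
            URL resource = new URL("http://services.example.com/bookmarks/jsmith");
            HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
            conn.setRequestMethod("GET");
            conn.setRequestProperty("Accept", "application/xml");

            // Print the representation the server returns for this resource.
            BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
            in.close();
            conn.disconnect();
        }
    }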

The availability of a large number of simple application programming interfaces (APIs) and the enablement of AJAX-style Web components (e.g., Yahoo!** Developer Network 15 design patterns and programmable Web APIs 16 ) have contributed to the upsurge in popularity of this development style with both professional and amateur programmers. Even older Web technologies such as JavaScript are reinvigorated.

Many mashups (applications composed of services and functions remixed to create a new context) embed a map into a Web page, where various actions drive the map to plot objects of interest, such as people, structures, or geographic locations. As far back as the mid-1990s, exposing geographic data to end-user developers was a popular activity. 17 The more recent variety illustrates the tipping point, where AJAX widget components enable rampant reuse. (A widget is a third-party item that can be embedded in a Web page.) The mapping mashup has become the prototypical situational application, primarily because of its simplicity and because it is a powerful paradigm for information organization. Combining services through simple interfaces and prebuilt components enables a nonprofessional developer to become an assembler: someone who understands the business problem and is comfortable with Web technology, but needs simpler concepts to assemble powerful solutions.

The rise of situational applications cannot be attributed to technological changes alone. Computer literacy is growing and, while the range of skills varies widely, the tooling is evolving to enable more users to build applications. Recent work to make Web development accessible to casual programmers includes assembly-level tooling that enables users to create composite applications out of components, even if they have little technical knowledge of the underlying capabilities. Examples of such platforms include QEDWiki (quick and easily done wiki), 18,19 ADIEU (Ad Hoc Development and Integration Tool for End Users), 20 and more informally, wiki platforms such as SnipSnap, 21 which enable a high degree of extension and customization. As tool design matures, even professionals who are uncomfortable with current Web technologies will be able to participate in their own solution design.

Social software and worker expectations

The introduction of social software (for example, blogs, wikis, activity management, tagging, and bookmarking) is contributing to the proliferation of situational applications. Social software offers data and widget services that enable other applications to offer capabilities that are hosted remotely in a new context. For example, IBM Dogear, 22 an enterprise social bookmarking solution, offers REST-style interfaces to data and provides a notably AJAX-like user experience, where much of the interaction occurs on the same page without changing contexts. For example, when exploring related content and people, the user is able to toggle between these views without reloading the entire Web page. Third-party applications make use of data stored in Dogear through the REST-style interfaces, and widget components further externalize reusable user interaction and visual design.

The evolution of tooling, skills, and usage patterns contributes to why community-based computing is seen as a high-potential opportunity. Enabling this development paradigm offers the opportunity to simplify IT to the point where the gatherer of requirements, the solution owner, and the developer are one and the same person, thereby ensuring that the solution delivered meets the immediate business need.

IBM ad hoc development market research

To better understand the market composed of nontraditional programmers performing ad hoc software development, IBM conducted a multiphase primary market research project. The objectives of this research can be summarized as follows:

1. Quantitatively profile the current ad hoc development activities and needs among different audiences.

2. Identify which ad hoc development tools are currently being used and assess the level of user satisfaction with these tools.

3. Gauge the relative market opportunity within those audiences.

4. Understand specific activities, preferences, and related factors in ad hoc development.

5. Determine and compare interest levels in an ad hoc development among the various audiences.

Participants were screened based on two criteria. First, professional developers who spent more than 50 percent of their time on formal application development projects using sophisticated development programming languages and tools, such as C++, C#, Java**, and advanced integrated development environments, were screened out. Second, all participants were required to have conducted an ad hoc development activity within the previous 12 months. This activity was defined to participants as occurring when a person automates or facilitates a particular business function, process, or activity by producing a software application that can be described by these characteristics:

* Often incorporates other existing software-In addition to any added capability, this new application can modify, enhance, customize, or extend an existing application, or include and combine parts or components from multiple existing applications.

* Occurs under the radar-Usually not recognized outside of a department or business unit as a formal project; seldom has a specific project budget or tracked timeline (as do larger, more recognized IT projects); tends to be performed and managed in a relatively unstructured manner.

* Built for the situation at hand-Built to solve an immediate, specific business problem, with little concern over whether the application will fit or work in different situations, organizations, environments, or systems, and without features that might allow it to adapt or adjust for more long-lived usage across multiple situations. Could even be thought of as disposable or replaceable.

* Developed in the most efficient, quick-and-dirty manner possible-Does not use rigorous and structured steps of formal development methods meant to reduce errors, maximize efficiency and performance, extend the life, or expand usage through future changes.

* Can be performed by people without extensive, sophisticated computer skills-Business professionals, analysts, and other IT staff often are engaged in ad hoc development. Requires business knowledge of the task at hand, but not very specialized programming knowledge or extensive IT skills.

* Developed using tools and components that do not require significant IT knowledge-Unlike advanced programming tools used to build an application from scratch, ad hoc development employs more basic tools, such as macros, wizards, forms, templates, visual construction, and the like. It usually makes use of preexisting software components, such as spreadsheets, database programs, report generators, or vertical business programs already in use.

A total of 790 Web-based interviews were completed with three separate target audiences:

* IT (excluding professional programmers)-250 interviews with IT managers/directors/staff

* Business partners or solution providers-250 interviews with solutions providers or partners who have performed ad hoc development activities for customers

* Line-of-business power users-290 interviews with non-IT but computer-savvy line-of-business power users

A complete mix of industries and line-of-business functions and departments was surveyed, with each group being large enough to evaluate as a subsegment. Government agencies were excluded.

In addition to those completing the interviews, over 25,000 successful contacts were made across the three audiences in this research. Regardless of ultimate qualification, as long as a respondent had an appropriate job function and responsibility, the following incidence information was collected before interview termination:

a. Percent who understand the ad hoc development definition as provided

b. Percent who say that ad hoc development is ever conducted by anyone in their company

c. Percent who say they have conducted ad hoc development personally in the last 12 months

The interview asked respondents a series of over 60 questions, grouped into seven categories:

1. Frequency and scope of their ad hoc development activities

2. Specific application and activity areas in which they have conducted, or plan to conduct, ad hoc development

3. Types of people, including themselves, who are involved in the ad hoc application development in their organization, and their various roles

4. Business value, top business benefits, and the reasons that drive the decision to conduct ad hoc development

5. Level of encouragement or discouragement they receive from other parties (e.g., team leaders, IT, and clients) in terms of conducting ad hoc activities, and types of benefits or barriers encountered

6. Perceived importance of ad hoc application development by the developers and by others in their organization, including various levels of management

7. The tools and mechanisms used in ad hoc development, and the level of satisfaction and desired features

The survey results were used to draw conclusions about the future marketplace, trends and opportunities, mechanisms for targeting and selling to this market, and market size assessment.

Several findings from this market research are referenced throughout this paper. In addition to those mentioned specifically, the findings also helped define the need for and scope of the IBM Situational Applications Environment, described in the next section.

THE IBM SITUATIONAL APPLICATIONS ENVIRONMENT

In the future, situational applications may become more challenging to IT development methods and place new demands on the enterprise IT environment. This could put corporate IT in the position of managing enterprise applications while trying to determine how to best facilitate development, deployment, and management of situational applications. Community-based development within the enterprise may significantly increase heterogeneity in the environment and introduce more complexity into monitoring, event analysis, root-cause detection, patch management, and other systems management tasks. Conversely, development based on situational applications can present opportunities to encourage innovation at departmental and individual levels and, at the same time, improve the productivity of knowledge workers.

Situational applications can enable workers to react quickly to changing needs with just-in-time solutions that are a better fit to some business problems. In addition, by embracing this development paradigm, IT enables the automation of business areas that were not affordable or were considered too narrow a niche before-a phenomenon sometimes called the long tail, a term first coined and popularized by XXXXX XXXXX. 23

To accelerate the adoption of situational applications in IBM and to test the potential benefits of community-based development in the enterprise, the office of the chief information officer (CIO) established an initiative called the Situational Applications Environment (SAE). Envisioned as a living-laboratory experiment to observe and harvest best practices, SAE is enabling an increasing number of employees to benefit from the use, creation, and sharing of situational applications.

SAE scope

The IBM intranet contains an enormous wealth of information, services, and community spaces covering all aspects of the business from research, product information, and marketing materials to business operations, personnel data, and social events. Some users are repurposing this information to meet their particular business needs, increasingly using Web technologies and the growing number of available internal and external services. SAE was conceived to recognize, encourage, and build a community around the activity of constructing situational applications so that solutions and best practices could be shared throughout IBM. As part of this initiative, corporate IT assumed the role of solution enabler by providing the tools and data services required by this community. SAE facilities built around three focus areas-consumables, tools and utilities, and community-are discussed in the next sections.

Consumables

A consumable is a building block used in application construction. It can be a service with a recognizable API called to obtain data, a code snippet that can be incorporated into a server-side script, or a JavaScript fragment to enhance a Web page. Without a substantial collection of consumables, the ability to build new applications is severely restricted. On the other hand, if there are a large number of consumables but they are difficult to find and understand, the adoption of situational application development will be inhibited.

Tools and utilities

Situational applications are built by combining consumables to create new capabilities, usually with some mediating logic and user-interface components. In this process, inevitably some integration code is required, and the resulting entity then needs a place in which it can be deployed. Included in tools and utilities are a Web presence for raising awareness of new situational applications and consumables, catalogs for locating and describing these assets, mashup makers for assembling applications, and lightweight hosting facilities for running applications and consumables once they are built.
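
As a rough illustration of what such assembly can look like in code, the sketch below combines two hypothetical consumables, a data service and a map widget, with a few lines of mediating logic. All of the interface and method names are invented for the example.

    import java.util.List;

    // Consumable #1: a data service with a small, recognizable API.
    interface EmployeeDirectory {
        List<String> findByDepartment(String dept);
    }

    // Consumable #2: a reusable user-interface component.
    interface MapWidget {
        void plot(String label);
    }

    public class WhoSitsWhere {
        // The mediating logic below is all the "application" there is:
        // directory data is remixed onto a map, mashup-style.
        static void build(EmployeeDirectory dir, MapWidget map, String dept) {
            for (String person : dir.findByDepartment(dept)) {
                map.plot(person);
            }
        }
    }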

Community

Several community aspects are important to the adoption of situational applications:

* Collaboration-The primary community of people who need a solution work on the application together, sharing it and improving it.

* Wide communication-As new applications and consumables are created, they are more likely to be exploited and reused if their existence is advertised widely outside of the primary community that created them.

* Feedback-Interested parties can comment on, suggest improvements for, or even share their original work adaptations.

SAE architecture

The requirements derived from consumables, tools and utilities, and community, as described above, shaped SAE and its architecture.

SAE Web site

The SAE Web site (Figure 2) is designed as a central hub for situational applications in IBM. The home page presents the latest and most popular applications and consumables, news items, latest forum threads, and guidance for developers. Other pages focus on available facilities; recommended processes for building, hosting and advertising applications and consumables; and help in the form of frequently asked questions.

SAE catalog

The SAE catalog (Figure 3) stores details of applications and consumables entered by an interested party, usually the owner. The details include minimal categorization augmented with tagging. Further tags can be added by the community, and both the entry and tags can be rated based on popularity and relevance. This user-generated taxonomy, or folksonomy as it is becoming known, is then used to filter entries along with more traditional keyword search techniques. Users can comment on entries and share their own usage examples.
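
A hedged sketch of the data structure such a catalog implies: each entry mixes the owner's minimal categorization with community-supplied tags and ratings, and the catalog filters on tags. The class and method names below are hypothetical, not taken from SAE.

    import java.util.ArrayList;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    class CatalogEntry {
        final String name;
        final Set<String> tags = new HashSet<String>(); // owner plus community tags
        private int ratingSum, ratingCount;

        CatalogEntry(String name, String... ownerTags) {
            this.name = name;
            for (String t : ownerTags) tags.add(t);
        }

        void addCommunityTag(String tag) { tags.add(tag); }   // anyone may tag
        void rate(int stars) { ratingSum += stars; ratingCount++; }
        double rating() { return ratingCount == 0 ? 0 : (double) ratingSum / ratingCount; }
    }

    public class Catalog {
        private final List<CatalogEntry> entries = new ArrayList<CatalogEntry>();

        void register(CatalogEntry e) { entries.add(e); }

        // Folksonomy-style filtering; a real catalog would pair this
        // with conventional keyword search.
        List<CatalogEntry> withTag(String tag) {
            List<CatalogEntry> hits = new ArrayList<CatalogEntry>();
            for (CatalogEntry e : entries) {
                if (e.tags.contains(tag)) hits.add(e);
            }
            return hits;
        }
    }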

The collaborative feedback loop implemented in the catalog design traces its roots to the open source movement. The feedback between the community and the developer is direct, allowing issues to be identified, acknowledged, and resolved, and resolutions praised. Community members take pride in their contributions, which can consist of writing the software, proposing new features, and identifying or fixing bugs. Everyone is solving the common problem so even the smallest recognition (a comment or a forum post) feeds the cycle of participation. Users can subscribe to most of the catalog information so that updates and other activities can be streamed to interested users in a push rather than a pull manner.

Figure 4 shows how the catalog asset details are delivered to the SAE Web site by means of a cached data feed. Users can navigate either directly to the application in which they are interested or to an entry in the catalog, where they can learn how to use APIs, provide their comments, or rate an asset. The catalog can be used to record details of assets hosted within or outside SAE and for any external services and applications that the community might find of particular interest.

Hosting

There are two SAE hosting offerings to meet different user requirements. The first type is a lightweight virtual hosting environment akin to a simple internal Internet service-provider offering. It includes a Web server, server-side scripting capability, and data storage. Users can upload code and other artifacts and make small configuration changes that affect only their virtual host. They are not allowed to make changes to system-wide configurations or to have root access. This option is more than adequate for most application developers who do not perform system administration tasks or complex system configuration or manage specialized middleware and back-end software.

The second type of hosted offering provides the higher-level tools that allow users to create content and to function with little or no coding. These environments, or mashup makers as they are becoming known, are built on the idea of wikis. They are community-edited Web sites in which a high-level markup language, or preferably a set of sophisticated graphical tools, is used to create pages of content with application function. Users build pages from a palette of components, test, and then share the resulting application without infrastructure considerations. The host platform manages the complete cycle of assemble, run, and share, exploiting rich client-side components to give the user an integrated development experience. Several mashup makers are emerging. Two of the more mature examples, ADIEU 20 and QEDWiki, 19 are briefly described later in the section "Mashup makers".

Expert:  Chris Parker replied 4 years ago.
Ok. I will take care of it.

-Chris
Customer: replied 4 years ago.
Thanks
Customer: replied 4 years ago.

Based on the article, "The Road to Our Scripting Future" (Yared, 2007), discuss the relevance of structured programming techniques in the development of applications for grid computing. Responses should be at least 200-300 words, and should consist of at least two full paragraphs with a single line between each paragraph. (Due Thursday please.)

Yared, P. (2007, March). The road to our scripting future. Dr. Dobb's Journal, 32(3), 28.

Customer: replied 4 years ago.
Expert:  Chris Parker replied 4 years ago.
Some unexpected personal work has come up. Is it ok if I provide the first answer early tomorrow morning?

Regards,
Chris
Customer: replied 4 years ago.
Yes, that's fine.
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the first question from the following link: Click.

Please review and accept.

The article link for the second question is not working. Could you please fix it?

Regards,
Chris

Edited by XXXXX XXXXX on 11/4/2010 at 6:36 PM EST
Customer: replied 4 years ago.

To execute, applications must be presented to the computer as binary-coded machine instructions specific to a given CPU model or family. However, programmers have a number of language options for generating those machine instructions. Perhaps most relevant here is the degree of abstraction a language provides. More abstraction means fewer operations for developers to direct.

Machine languages are the native languages of computers, the only languages directly understood by CPUs. With the exception of programmable microcode, machine languages are the lowest level of programming language. As such, they offer no abstraction whatsoever. Consisting entirely of numbers, machine languages are rarely used to write programs because developers must manually code, in numerical code, each and every instruction associated with the application's business logic, as well as its underlying services such as sockets, registers, memory addresses, and call stacks.

Considering the labor associated with machine languages, developers desiring complete control over all aspects of application performance normally use assembly language. Machine languages and assembly languages contain the same instructions, making them essentially the same thing. The advantage of assembly languages is the thin layer of abstraction they create by presenting instructions in the form of names. These mnemonic instructions make it easier to write programs, which are then transformed into machine language by assemblers.

Midlevel programming languages provide the next level of abstraction, while letting programmers maintain a high degree of overall control. Typified by C, midlevel languages provide low-level access to memory and require you to explicitly code much of the application's underlying services. Yet these languages can also relieve you of other duties, such as coding functions, variables, and expression evaluation.

Perhaps one of the most significant advantages offered by most midlevel languages is portability, which enables machine-independent coding. Unlike high-level languages, though, the portability enabled by midlevel languages is not based on a virtual machine or a common machine-independent environment. Rather, the application is compiled for different computer platforms and operating systems with minimal change to its source code.

High-level programming languages allow an even greater degree of abstraction, so you can more fully focus attention on the application's business logic instead of the services required to support the CPU. High-level languages often handle thread management, garbage collection, APIs, and other services natively. Java, for example, relies on a virtual machine that abstracts all operating system functions to provide its famous "write once, run anywhere" capability. Other high-level languages include a variety of interpreted and compiled languages including Basic, C++, C#, Cobol, Perl, PHP, and Python.
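
The abstraction gap is easy to see with Java itself: compile a trivial method and disassemble it with the JDK's javap tool, and what comes back are mnemonic instructions for the virtual machine rather than for any physical CPU. A minimal example, with representative javap -c output shown in comments:

    public class Add {
        // Compile with javac, then disassemble with: javap -c Add
        static int add(int a, int b) {
            return a + b;
        }
    }

    // javap -c lists the VM's mnemonic instructions for add(int, int):
    //   iload_0    // push the first int argument onto the operand stack
    //   iload_1    // push the second int argument
    //   iadd       // add the top two stack values
    //   ireturn    // return the int result
    // Because these instructions target the virtual machine, the compiled
    // class file runs on any platform with a JVM, hence "write once, run anywhere."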

Finally, natural languages deserve mention. Simply put, natural languages overwhelm the human/machine interface. Huge, continually expanding vocabularies with shifting meanings and byzantine grammar that is inconsistently employed render natural languages unsuitable for computers.

High-level languages simplify complex programming while low-level languages tend to produce more efficient code. Using high-level languages, you can break up a complex application into smaller components, although the trade-off for convenience is most often code efficiency. Consequently, when applications must meet certain performance standards, developers may forego the ease of coding in high-level languages and opt for lower level languages.

Computing Architecture Continuum

From the mainframe to grid computing, each computing architecture has developed in response to the demands organizations place on their IT departments. Similar to the range of programming language options, each computing architecture in Figure 1 presents a unique environment.

Mainframe computers use a host/terminal architecture, whereby all of the application processing executes on the mainframe host Multiple users can simultaneously access the mainframe via local or remote "dumb" terminals (or terminal emulation software), which simply display queries and results. Introduced in the 1950s, mainframes remain popular in large organizations needing extreme reliability, availability, and serviceability.

The mainframe is ideal for mission-critical applications that process bulk data such as credit-card processing, bank account management, market trading, and ERP. Applications that require high security are another mainframe strength. Today's leading mainframe vendors include IBM, Hewlett-Packard, and Unisys.

Minicomputers employ the same host/terminal architecture as mainframes but typically serve a smaller user population. Launched in 1959, the minicomputer era was ushered in by Digital Equipment Corporation and the introduction of its PDP-1. Selling for an amazingly low $120,000, the PDP-1 extended the reach of computing to a broader audience.

Over time, minicomputers basically morphed into midrange systems and servers, but their function remains the same-processing applications for multiple users. In small and midsize businesses, midrange systems usually run general business applications. Large enterprises generally use them for department-level operations. Vendors include IBM, Hewlett-Packard, and Sun Microsystems.

Moving away from the hosting model, client/server architecture splits the application-processing load between one or more servers and the user's client computer. Client/server encourages IT departments to select the appropriate hardware and software platforms for client and server functions. For example, database management system servers frequently run on platforms specially designed and configured to perform queries, while file servers usually run on platforms with special elements for managing files.

Client/server was a response to monolithic, isolated applications running on minicomputers and mainframes. Seeking integrated, responsive, and comprehensive applications, companies turned to client/server architecture to support the complete range of their business processes-from call centers to CRM and beyond. Leading client/server vendors include Oracle and PowerSoft.

The Internet computing model defined by the World Wide Web introduced a new twist to client/server's distributed processing model. Whereas client/server relied on dedicated client-side software to run applications, Internet computing relies on one client application-the web browser-to present the GUIs of countless applications while back-end servers process the bulk of the application.

The shift from the client/server "fat client" to the Internet "thin client" brings huge benefits. Software upgrades are made solely at the server and no longer include a client-side component that has to be distributed to the user base. Meanwhile, applications both inside and outside the firewall give authorized users ready access to any web-enabled application, from company newsletters and HR benefits to e-commerce and financial services. Leading Internet vendors include Sun and BEA.

Grid computing architecture is emerging to let companies flatten the dominant three-tier Internet architecture. Today, the back end of a standard web application is processed by a low-end web server, a high-end application server, and a high-end database server or some other data store. Grids can meld the web server and application server tiers into a single tier of parallel, commodity servers running Linux.

A web application deployed on a grid architecture offers significant dollar savings over the same application running on three tiers. In different hardware configurations, grid computing is being used successfully on a variety of applications, from modeling financial markets and simulating earthquakes to serving millions of web pages per day. In the grid arena, the leading technologies are Linux and x86-based computers.

Programming Language Progression

As computing platforms shift (see Figure 2), languages of choice shift as well. While Cobol dominated the mainframe and minicomputer eras, the client/server era may have presented developers with the most language options. Developers could choose from a number of popular languages, including Microsoft's Visual Basic, Borland's Delphi, PowerSoft's PowerScript, and others. These languages were all essentially somewhat-typed, pseudo-interpreted languages. And they were all replaced by Java, a strongly typed, pseudo-interpreted language, and Visual Basic .NET, a somewhat-typed, pseudo-interpreted language.

During the Internet era, organizations ran a variety of server operating systems in the middle tier, including Solaris, AIX, HP-UX, Irix, and Windows NT. In many companies, a strong requirement was that applications be portable across two or more platforms to avoid being locked into a single vendor. If an organization's applications only ran on one platform, the organization lost much of its bargaining leverage and the vendor could price gouge in the next upgrade cycle.

Java was originally designed to run on set-top clients and then on PC clients, so the language and its runtime were designed to be portable and undoubtedly met this goal. Using servers from NetDynamics and KIVA, some companies had already started running Java on the server. In addition, Java offered some of the benefits developers enjoyed in languages from their client/server days, such as garbage collection and higher level APIs to operating system features that abstracted complexity.

Java soon gained a critical mass of vendors who supported the platform. Everything under the sun soon had a Java API, including Oracle, SAP, Tibco, CICS, MQSeries, and so on. Over a couple of years, these applications and services were all accessible via standardized APIs that grew into J2EE, which went on to dominate the corporate computing environment of the Internet era.

What Java failed to provide was 4GL-type tools. However, no other language had 4GL-type tools for web applications, so their absence was no surprise. Unfortunately, years have passed, and the vast majority of J2EE applications are still built by hand. A lesson that Microsoft has learned is that for APIs to be toolable, they need to be developed concurrently with the tool. Moreover, both APIs and tools should depend on easily externalized metadata. Java APIs were always written on the merits of the APIs themselves, and subsequent tools were predominantly code generators shunned by programmers.

The Java APIs grew into a morass of inconsistent and incomprehensible APIs. Programming even the simplest application proved to be complicated. According to Gartner, more than 80 percent of J2EE deployments are servlet/JSP-to-JDBC applications. That is, the vast majority of these applications are basically HTML front-ends to relational databases. Ironically, much of what makes Java complicated is the myriad of band-aid extensions, such as generics and JSP templates, which were added to simplify development of such basic applications.

Despite these issues, Java and J2EE have come to completely dominate the Internet era of corporate computing. These technologies will remain dominant until companies begin their migration to next-generation grid architectures and their related languages.

The Rise of Grid

Grid computing takes advantage of networked computers, creating a virtual environment that distributes application processing across a parallel infrastructure. Grids can employ a number of computational models to achieve their goal of high throughput.

Heterogeneous grid computing relies on a mix of different, geographically distributed computers to solve massive computational problems such as simulating earthquakes. Mainframes in California and Massachusetts may work with clusters of midrange systems in China and thousands of PCs across Europe to solve a single problem.

The drive to heterogeneous grid computing arose from sheer frustration. With limited access to scarce, expensive resources such as supercomputers, users recognized that compute-intensive problems could be broken up and distributed across multiple, lower cost machines that were readily available. Typically, the resulting calculations could be delivered faster and more cost effectively.

With the advantages of grid computing, the appearance of homogeneous grids simply reflects the fact that clusters of low-cost, homogeneous PCs running Linux can be a genuine alternative to higher priced computer architectures. Numerous Wall Street firms now run complex financial simulations, such as Monte Carlo calculations, on large clusters of Linux machines.

On the Web, the massive throughput offered by grid computing takes on a new meaning. Rather than focus on solving a single problem-sequencing the human genome, for example-a grid can focus on executing a single task, such as serving web pages. Web portals such as Google, Yahoo, and Amazon all have demonstrated the efficacy of running thousands of commodity Linux machines as web servers.

Clearly, the grid architecture works for many well-known Internet companies. Now, users are starting to move transactional applications onto the grid architecture. The all-or-nothing nature of transactional applications can make moving to commodity grid computing a delicate matter for companies that are used to running these applications on high-end architectures that are perceived as more robust and reliable. On the other hand, the grid advantages can prove to be an irresistible lure.

Grid Languages

Regardless of when transactional applications ultimately wind up on a grid, IT is already engaging in a subtle paradigm shift, moving away from larger SMP boxes running proprietary flavors of UNIX and toward large grids of one- to two-processor x86 machines running Linux. These machines already dominate the front-tier web server market. Now, they are starting to appear on the back end with products like Oracle RAC, the grid-enabled version of Oracle. The transition to grid will soon affect the middle tier, but it is held back by J2EE implementations, which were built to run on small clusters of multiprocessor machines rather than large clusters of uniprocessor machines.

Unlike earlier architectures, grid has no pressing requirement for portability. Companies are no longer locked in by a vendor when they run Linux on x86 white boxes. Consequently, they have no problem with applications that only run on Linux/x86. The footnote to this portability rule concerns corporations that require applications be developed on Windows-based machines. For these companies, the only portability requirement is the ability to develop on Windows and deploy on Linux.

Basically, today's corporate applications all produce text, whether HTML for web browsers or XML for other applications. With the onslaught of web services, all back-end resources will soon be providing XML rather than binary data. The average corporate application will be a big text pump, taking in XML from the back end, transforming it somewhat, and producing either HTML or XML.
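To make the "text pump" idea concrete, here is a minimal Python sketch. The order XML and the HTML layout are invented for illustration, not taken from any real back-end schema: the program takes XML in, transforms it, and emits HTML.

    # Minimal "text pump": XML in, HTML out. Data and layout are hypothetical.
    import xml.etree.ElementTree as ET

    ORDER_XML = """
    <orders>
      <order id="1001"><customer>Acme Corp</customer><total>250.00</total></order>
      <order id="1002"><customer>Globex</customer><total>99.50</total></order>
    </orders>
    """

    def pump(xml_text):
        # Parse the back-end XML, pull out the fields, render an HTML table.
        root = ET.fromstring(xml_text)
        rows = []
        for order in root.findall("order"):
            rows.append("<tr><td>%s</td><td>%s</td><td>%s</td></tr>" % (
                order.get("id"),
                order.findtext("customer"),
                order.findtext("total")))
        return "<table>\n" + "\n".join(rows) + "\n</table>"

    print(pump(ORDER_XML))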

With this in mind, clear requirements emerge for a programming language best suited to support corporate applications in a grid environment:

* Fast handling of XML (dynamic data with fluctuating types).

* Fast processing of text into objects and out of objects.

* Optimal handling of control flow, which is the bulk of most applications' limited logic.

* Minimal portability (Linux/x86 and Windows/x86).

* Minimal abstraction (very thin veneer over the operating system for system services).

* Specific tuning for one- or two-processor x86 machines.

Considering these requirements, Java does not fare well:

* Java is a strongly typed language that does not easily handle XML data, which is inherently unstructured.

* Java is painfully slow at processing text because it cannot manipulate strings directly.

* Java is great for complicated applications but not ideally suited for specifying control flow.

* Java provides maximum portability, which is overkill for grid apps.

* Java provides maximum abstraction with a huge virtual machine that sits between the application and the operating system and is overkill for grid applications.

* Most J2EE implementations are tuned for 4-16 processor SMP boxes.

For applications deployed on grid architectures, Java does not suffice. What developers need is a scripting language that is loosely typed to facilitate XML encapsulation and that can efficiently process text. The language should be well suited to specifying control flow, and it should be a thin veneer over the operating system.
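As an illustration of what loose typing buys with XML whose shape fluctuates, consider this Python sketch (the customer document is invented): the same element can collapse to a string, a list, or a nested dictionary, with no fixed class hierarchy declared in advance.

    # Sketch: dynamic typing absorbing XML whose shape fluctuates.
    import xml.etree.ElementTree as ET

    def to_value(elem):
        # Leaf elements become strings; repeated tags are promoted to lists.
        children = list(elem)
        if not children:
            return (elem.text or "").strip()
        out = {}
        for child in children:
            value = to_value(child)
            if child.tag in out:
                if not isinstance(out[child.tag], list):
                    out[child.tag] = [out[child.tag]]
                out[child.tag].append(value)
            else:
                out[child.tag] = value
        return out

    doc = ET.fromstring("<customer><name>Acme</name>"
                        "<phone>555-0100</phone><phone>555-0101</phone></customer>")
    print(to_value(doc))  # {'name': 'Acme', 'phone': ['555-0100', '555-0101']}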

Most Linux distributions already bundle three such languages: PHP, Python, and Perl. PHP is by far the most popular. Python is considered the most elegant, if a bit odd. Perl is the tried-and-true workhorse. All three languages are open source and free. As Figure 3 illustrates, PHP use has skyrocketed over the past few years.

Grid Concerns

Like the computing architectures and languages that came before it, grid comes with its own set of challenges and trade-offs. For example, there are various additional semantics and failure modes associated with grid's asynchronous programming model, especially in large-scale distributed applications.

Perhaps the biggest difference is that the software needs to expect that machines will fail, and fail regularly. This means redundancy must be built into the software layer. When invoking logic, programmers should not be thinking about calling a specific machine, which is the traditional synchronous RPC model. Instead, programmers should think about invoking a service. At runtime, that service could in fact be running on the same machine or on different machines.
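A rough Python sketch of that shift follows, with hypothetical endpoint addresses (a real grid would get its replica list from a registry or load balancer, not a hard-coded table): the caller names a service, and any live replica may answer.

    # "Invoke a service, not a machine": try replicas until one responds.
    import random
    import urllib.request

    SERVICES = {
        # Hypothetical replica addresses; normally supplied by a registry.
        "inventory": ["http://10.0.0.11:8080", "http://10.0.0.12:8080"],
    }

    def invoke(service, path):
        replicas = list(SERVICES[service])
        random.shuffle(replicas)          # no fixed machine to call
        for base in replicas:
            try:
                with urllib.request.urlopen(base + path, timeout=2) as resp:
                    return resp.read()
            except OSError:
                continue                  # machine failed: fail over, not fail
        raise RuntimeError("no live replica for " + service)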

The biggest conceptual leap programmers need to make is that applications must evolve into a set of services so they can be spread across a grid. An application built around a main event loop running sequential logic can still be deployed to a grid, but that model scales vertically on hardware, not horizontally.

Clearly, the massively scalable web sites in use today represent the first large-scale use of this type of architecture. Web interactions are inherently atomic. For instance, user A's shopping cart has nothing to do with user B's shopping cart. User A's credit card can be processed independently of user B's credit card. In an e-commerce site, the only resource these users really have to share is the inventory system and an external shipper's tracking service.

Grid has evolved from numerical computing, where things like airbag simulations could be split up among numerous machines, to serving multitudes of web users. The next level up is servicing requests on shared data: for example, searching via Google and browsing social networks. Google has it a bit easier because it matters little if a search result differs slightly from one query to the next, so the indexes can be updated gradually across clusters. That problem is fuzzier, and users don't notice.

Social networks are a bit different, and they have had a lot of problems scaling. Essentially, the entire object graph of social relationships has to be accessible in real time by all of the independent web users. Friendster (www.friendster.com/) solved this problem by using PHP to service the web requests and using a back-end service that had the object graph in memory. In this hybrid model, the social network construct is essentially considered a back-end service, like an inventory system.
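A toy Python version of that back-end service makes the hybrid model concrete. The names and structure here are invented for illustration (the article does not describe Friendster's internals): the whole graph lives in memory as adjacency sets, so the web tier can query relationships in real time.

    # In-memory social graph kept by a back-end service.
    class FriendGraph:
        def __init__(self):
            self.adj = {}                         # user -> set of friends

        def add_friendship(self, a, b):
            self.adj.setdefault(a, set()).add(b)
            self.adj.setdefault(b, set()).add(a)

        def friends_of_friends(self, user):
            # Second-degree contacts: the query social sites must answer fast.
            direct = self.adj.get(user, set())
            result = set()
            for friend in direct:
                result |= self.adj.get(friend, set())
            return result - direct - {user}

    g = FriendGraph()
    g.add_friendship("ann", "bob")
    g.add_friendship("bob", "carol")
    print(g.friends_of_friends("ann"))            # {'carol'}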

A final concern revolves around maintenance. These systems are incredibly hard to debug. Many homegrown tools have been built for the task, but debugging grids remains an emerging area. From a monitoring, administering, and analyzing perspective, all of the major systems management vendors have offered solutions for managing large clusters of commodity machines for years now. They are still getting better, but there are plenty of choices.

The Scripting Future

PHP, Python, and Perl are still somewhat immature in terms of their enterprise libraries, and their web services capabilities are nascent. Regardless, they have the necessary ingredients to meet the requirements of the next corporate computing phase of "text pump" applications.

In addition to being free and open source, these languages are easy to learn and use. PHP, Python, and Perl are primed to follow the trail blazed by Linux and Apache and make huge inroads into the corporate market. The latest version of PHP is virtually indistinguishable from Java, to the point of almost identical syntax and keywords.

Outside of the open-source arena, Microsoft has created Xen, previously named "X#" (http://research.microsoft.com/~emeijer/Papers/XML2003/xml2003.html), an XML-native language for its common language runtime (CLR). Visual Basic is arguably the most popular scripting language in the world, and Windows is well tuned for one- to two-processor machines. As long as Microsoft remains in the picture, developers will most likely be able to choose among .NET, Java, and PHP/Python/Perl. However, when the application is on a grid architecture, the open-source scripting languages will rule.

Expert:  Chris Parker replied 4 years ago.
I can post by 5 PM EST today. Does that work for you?

Regards,
Chris
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the second question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.
Do you think grid computing could be applied to a program such as a web banking system? What about a video service like Hulu or a superstore like Amazon? Are grid solutions applicable to any of these? Due Sunday, please.
Expert:  Chris Parker replied 4 years ago.
Hi!

Download my response to the second question from the following link: Click.

Please review and accept along with the first answer.

Regards,
Chris

Edited by XXXXX XXXXX on 11/7/2010 at 6:12 PM EST
Customer: replied 4 years ago.
thanks a lot
Expert:  Chris Parker replied 4 years ago.
You are welcome.

-Chris
Customer: replied 4 years ago.

The article, "Building Trustworthy Software" (Hogan, 2007), discusses many software development issues. Pick one of these issues, explain its significance, and critically examine the article's discussion of it.

 

Hogan, H. (2007, July). Building trustworthy software. Control Engineering, 54(7), 78.

 

Responses should be at least 200-300 words, and should consist of at least two full paragraphs with a single line between each paragraph. (Due Thursday please.)

Customer: replied 4 years ago.

In the article, "(A Look Back at) GOTO Statement Considered Harmful" (Dijkstra, 2008), the author reprints a historic letter, originally published in 1968, that criticizes the use of the GOTO statement in computer programming. Discuss why the author objects to GOTO statements, and explain why you agree or disagree with his perspective.

 

Dijkstra, E. W. (2008, January). (A look back at) go to statement considered harmful. Communications of the ACM, 51(1), 7.

 

Responses should be at least 200-300 words, and should consist of at least two full paragraphs with a single line between each paragraph. (Due Friday please.)

Customer: replied 4 years ago.

Hogan, H. (2007, July)

 

Control system reliability today increasingly depends on software. Manufacturers are doing more to ensure reliability, but control engineers have to do their part - and perhaps change their ways. Besides more reliable and higher quality code, the bonus might be a decrease in project cost, time, and risk.

Eelco van der Wal, managing director of the Gorinchem, Netherlands-based industrial control organization PLCopen, is on a crusade--one that impacts applications and, hence, the part of software reliability control engineers are responsible for. His efforts, though, will only provide part of the solution.

Reliable control software requires a firm foundation, something only vendors can provide. With programs expanding from a few lines of code to thousands or more, van der Wal sees ballooning costs and rising risks. He also notes that changes, before or after an application is deployed on a controller, are almost guaranteed.

The key to reducing the costs and the risks, according to van der Wal, is structured programming via the interface specified in the global industrial control programming standard IEC 61131-3. While the use of structured programming represents a change from the traditional approach, van der Wal says the payoff can be improved software reliability and more. When multiple projects are considered, the benefits are even greater, he says.

"If there's a certain overlap between the first project and the second project, you will see a dramatic decrease in cost and time and risk factors, and an increase in [software] quality," he says.

The cost savings vary, but van der Wal indicates 40% is a good figure to use. Thus, more reliable software may also be less costly to produce.

A look at what control engineers can do to improve software reliability also reveals some of the tools that exist to make the job easier. In addition, discussions with controller manufacturers reveal how they're responding to coding challenges by improving their own methods for ensuring software reliability.

Function block fundamentals

While advocating a standards-based, structured programming method, van der Wal notes that the benefits aren't free and they do require a change in the traditional way of developing code. "You have to have a software development philosophy in place," he says.

For control engineers, this means adopting a structured approach. Problems have to be decomposed into individual components, each of which can then be handled by a function block with defined inputs and known outputs.

Function blocks are designed using one of the graphical or textual languages in the IEC 61131-3 standard. Internally, they have defined variables and data types, such as integer, real, Boolean, or array. With the function block designed, it is simulated off-line. Then, when it meets requirements, it is integrated with other function blocks into the final application, which is then deployed and maintained.

It's the function blocks that give rise to the increase in reliability and the decrease in cost, says van der Wal. The former happens because the function blocks are relatively small and simple. Thus, it's possible to exercise them completely in tests and understand exactly what they're doing. The cost reduction is a consequence of being able to reuse the finished function block in another project. Other savings result from being able to make changes more easily, and thus maintain an application for less.
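Real function blocks are written in IEC 61131-3's own graphical and textual languages; the Python analogy below is only meant to illustrate the testability argument: a block with declared inputs, outputs, and internal state is small enough to exercise completely.

    # Function-block analogy: a two-point (hysteresis) controller.
    class HysteresisBlock:
        def __init__(self, low, high):
            self.low = low          # switch-off threshold
            self.high = high        # switch-on threshold
            self.out = False        # block output

        def step(self, value):
            if value >= self.high:
                self.out = True
            elif value <= self.low:
                self.out = False
            return self.out         # between thresholds, output holds state

    # Small, defined behavior means the block can be tested exhaustively.
    fb = HysteresisBlock(low=20.0, high=25.0)
    assert fb.step(26.0) is True    # above high -> on
    assert fb.step(22.0) is True    # inside band -> holds
    assert fb.step(19.0) is False   # below low -> off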

A structured programming approach does require more upfront work, particularly on the first project. It also means it's not possible to dive right into a problem, no matter how tempting that is. The control problem partitioning task, however, is helped by such tools as the sequential function chart.

Model-based design tools

Mathworks, the Natick, MA-based privately held firm, has a different but conceptually similar approach to reliable control system software. Along with technical computing software, the company develops and supplies model-based design tools, which are the basis for its control software. Its model-based approach to developing control systems, like that of structured programming, requires that the problem be defined and broken down into smaller steps. With that done, the company's Simulink process modeling software can create function block analogs.

Paul Barnard, Mathworks' marketing director for control design automation, explains that once a process is modeled, the software can be used for code development. "You can graphically design or describe an algorithm and through automatic code generation generate C code that then is compiled and runs usually on some type of embedded target."

National Instruments' LabView takes a similar approach. The benefit of graphical development methods is that they abstract the control problem and let a single developer manage more functionality. Like compilers that produce machine instructions, automatic code generation removes the human element from the process. The resulting code is more uniform than code produced by people--and might be more reliable, although that's not guaranteed.

Brett Murphy, Mathworks' technical marketing manager for verification, validation and test, notes that Mathworks is aware that automatic code generation is not infallible and provides ways to test the generated solution and the models it springs from. "We have modeling checking standards, for example," he says.

Other tools provide formal analysis of both model and code, thereby proving if it's possible to reach a particular failure mode. Such tools allow checking at the component or model level, but they can't at present be scaled up to large systems.

No machine is an island

That lack of scalability is unfortunate because applications run on platforms and often on systems that are part of a larger network. The combination brings with it an entirely new set of issues that can affect software reliability in areas that are sometimes beyond a control engineer's domain.

Roy Kok, product marketing manager for Proficy HMI/SCADA software at GE Fanuc Automation of Charlottesville, VA, notes that the platform operating system (in this case, Microsoft Windows, NT or Vista) needs to be up-to-date and, if possible, fully patched. But applying a patch before it's had a chance to be seasoned in non-critical areas can adversely impact software reliability, he says. So vendors like GE Fanuc have to supply the expertise to know when the patch, like the porridge in the fairy tale, is just right.

Likewise, other software and components can hurt an automation application - even if there's no problem initially. "Updates of ancillary software can have a direct impact on the operation and reliability of the primary solution," says Kok.

While software reliability can be improved by removing non-critical or untested extras, that's not enough, Kok continues. What's needed is a diagnostic that can spot changes in third-party components and elsewhere.

For example, a production line may be down and it may be possible to bring it back up if a piece of code referencing a limit switch is bypassed. There's a tremendous incentive to make the change immediately, bypass the switch, and fix the problem software later. Kok says it's important that the temporary fix isn't allowed to slip in unnoticed, become permanent, and cause a failure some time afterward. GE Fanuc's change management system can do periodic and automatic scans of controller software, compare the results to a backup file, recognize the change, and thereby document it.
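The article doesn't describe GE Fanuc's internals, but the scanning idea itself is simple to sketch in Python: hash the deployed program files, compare against a stored baseline, and report anything that differs, so a "temporary" bypass can't become permanent unnoticed. File layout and baseline format here are assumptions for illustration.

    # Sketch of a change-management scan: baseline vs. current file hashes.
    import hashlib
    import json
    from pathlib import Path

    def snapshot(root):
        # Map every file under `root` to the SHA-256 of its contents.
        return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
                for p in sorted(Path(root).rglob("*")) if p.is_file()}

    def report_changes(root, baseline_file):
        baseline = json.loads(Path(baseline_file).read_text())
        current = snapshot(root)
        # Anything added, removed, or edited since the baseline was taken.
        return [path for path in sorted(set(baseline) | set(current))
                if baseline.get(path) != current.get(path)]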

Trust, but verify

Jeffrey Harding, director of system software architecture at ABB Process Automation, similarly notes the company's 800xA software has diagnostic capabilities that check system operation and health. The product, for example, monitors installed software components and compares them to tested and known good configurations. "This allows the system to detect if an untested version of a component, including a third party component, has been installed," says Harding.

Eric Kaczor, product marketing for engineering software products at Siemens Engineering and Automation, notes that end users are confronted by many different pieces of software. That situation makes version control and version compatibility difficult to achieve and manage. One technique that's used freezes time--at least as far as software is concerned--to reduce the problem. Thus, Siemens will offer a single "golden DVD" that contains software needed by users for its equipment. Kaczor says that the software in such a setup will lag the most current versions somewhat, but the combination offers something the most recent software might not be able to match. "It's all guaranteed to work together."

For its part, Kaczor says that Siemens tries to ensure that software as-shipped is compatible with any combination of all released pieces of hardware. The company does this through extensive regression testing, trying each new prospective software release on rooms full of equipment. Such a check is thorough, but not 100% exhaustive, because it's difficult and perhaps impossible to do verification against all third party software.

Opto 22 is a Temecula, CA-based maker of I/O hardware, controllers, and software. Like others, the company does regression testing before releasing software to customers. Roger Herrscher, senior engineer in the Opto 22 quality assurance group, says the company recently reorganized its QA group to better test its software against the range of possible hardware. Previously, the QA team had been composed of engineers dedicated solely to product testing. Now Opto 22 has brought software and hardware designers into the group. The effort increases detailed product knowledge, improves testing, and ultimately improves quality.

Such efforts come with a caveat with regards XXXXX XXXXX because of its nature: "When using software, there's almost always more than one way to accomplish a given task," notes Herrscher.

A natural tendency, he says, is for the designer of software to have a preferred way to do things. Thus, the designer might not consider and exercise all the avenues a user might follow. By having someone else design the test function, that problem can be circumvented, Herrscher says.

Like other suppliers, Schneider Electric of Rueil-Malmaison, France, also does regression testing with every release of its Unity Pro software package. In particular, Rich Hutton, automation product manager for Schneider Electric, notes that the communications layer that handles traffic is key to reliability and consistent operation. He notes that diagnostics can catch memory failures and other problems, but adding them takes resources. If done to an extreme, diagnostics can actually impact the reliability of the main application, he says.

While Schneider Electric works to make its software reliable, the company is also taking steps to ensure that the programming done by end users to create a desired application is also reliable. Hutton notes that the company's development tool supports five IEC 61131-3 languages in which to program. This is done in part because it helps Schneider Electric's customers do their part to ensure software reliability.

A choice of standard programming languages "improves the reliability of the designers, because they can utilize a standard programming style," says Hutton. "That makes their code writing usually more reliable."

Customer: replied 4 years ago.
Expert:  Chris Parker replied 4 years ago.
The link to the Dijkstra article is not working. Could you please fix it?

Regards,
Chris
Customer: replied 4 years ago.
Expert:  Chris Parker replied 4 years ago.
Thanks.

Download my response to the first question from the following link: Click.

Please review and accept.

Regards,
Chris
Customer: replied 4 years ago.

What specific types of actions can developers of control software take to improve their output quality? Do you think that the area of control software has any drastically different requirements of developers than other areas? Due on Friday, please.

Customer: replied 4 years ago.
Is the work ready, please?
Expert:  Chris Parker replied 4 years ago.
I am sorry for the delay, but I haven't been well for the past two days. Is it ok if I complete the two questions first thing tomorrow morning?

Regards,
Chris
Customer: replied 4 years ago.
yes indeed
Expert:  Chris Parker replied 4 years ago.
Thanks.

Download the first answer from the following link: Click.

Please review and accept.

Regarding the second question, what kind of control software is it about?

Regards,
Chris

Edited by XXXXX XXXXX on 11/13/2010 at 12:38 PM EST
Customer: replied 4 years ago.
it did not say what kind so any would be fine please.
Expert:  Chris Parker replied 4 years ago.
Customer: replied 4 years ago.
Thanks
Expert:  Chris Parker replied 4 years ago.
You are welcome.

This thread has become very long and is taking some time to open. When you have new questions for me, could you please start a new thread and put "For XXXXX XXXXX" in the first line?

Thanks,
Chris
Customer: replied 4 years ago.

o Write a simple algorithm in pseudocode that lists the program's input, output, and processing components in a logical, sequential order. At this stage, do not show the tasks and subtasks within each component.

o Document the purpose of each module (component).

o Identify the variables that are needed in the program. For each variable, provide the following:

 

  • A name
  • Its data type
  • A description of its purpose
Due ASAP, please. I will make a new post thread for the rest of the work that I'll be sending to you. Thanks for all your help.
Expert:  Chris Parker replied 4 years ago.
I can answer this first thing tomorrow morning. Does that work for you?

Regards,
Chris
Customer: replied 4 years ago.

Don't worry about it. Thanks anyways
