<--- Score

      12. What information should you gather?

      <--- Score

      13. What is a worst-case scenario for losses?

      <--- Score

      14. How do you gather Software reliability requirements?

      <--- Score

      15. Have all of the relationships been defined properly?

      <--- Score

      16. What Software reliability requirements should be gathered?

      <--- Score

      17. How do you catch Software reliability definition inconsistencies?

      <--- Score

      18. How would you define Software reliability leadership?

      <--- Score

      19. How do you think the partners involved in Software reliability would have defined success?

      <--- Score

      20. Are different versions of process maps needed to account for the different types of inputs?

      <--- Score

      21. Is the current ‘as is’ process being followed? If not, what are the discrepancies?

      <--- Score

      22. Are all requirements met?

      <--- Score

      23. Is there a completed, verified, and validated high-level ‘as is’ (not ‘should be’ or ‘could be’) stakeholder process map?

      <--- Score

      24. Who defines (or who defined) the rules and roles?

      <--- Score

      25. Is what you do clearly defined in and to your organization?

      <--- Score

      26. How do you hand over Software reliability context?

      <--- Score

      27. Is the work to date meeting requirements?

      <--- Score

      28. What are the tasks and definitions?

      <--- Score

      29. Do you all define Software reliability in the same way?

      <--- Score

      30. Have the customer needs been translated into specific, measurable requirements? How?

      <--- Score

      31. Are there different segments of customers?

      <--- Score

      32. Are the Software reliability requirements complete?

      <--- Score

      33. What sources do you use to gather information for a Software reliability study?

      <--- Score

      34. Are task requirements clearly defined?

      <--- Score

      35. What are the Software reliability tasks and definitions?

      <--- Score

      36. What baselines are required to be defined and managed?

      <--- Score

      37. Do you have a Software reliability success story or case study ready to tell and share?

      <--- Score

      38. What are the record-keeping requirements of Software reliability activities?

      <--- Score

      39. Have specific policy objectives been defined?

      <--- Score

      40. Who approved the Software reliability scope?

      <--- Score

      41. Is there any additional Software reliability definition of success?

      <--- Score

      42. What are the Software reliability use cases?

      <--- Score

      43. How and when will the baselines be defined?

      <--- Score

      44. Have all basic functions of Software reliability been defined?

      <--- Score

      45. How do you manage changes in Software reliability requirements?

      <--- Score

      46. What Software reliability services do you require?

      <--- Score

      47. What key stakeholder process output measure(s) does Software reliability leverage and how?

      <--- Score

      48. What is in scope?

      <--- Score

      49. How do you gather requirements?

      <--- Score

      50. If substitutes have been appointed, have they been briefed on the Software reliability goals and received regular communications as to the progress to date?

      <--- Score

      51. Do you have organizational privacy requirements?

      <--- Score

      52. Is data collected and displayed to better understand customers’ critical needs and requirements?

      <--- Score

      53. Does the scope remain the same?

      <--- Score

      54. Are audit criteria, scope, frequency and methods defined?

      <--- Score

      55. The political context: who holds power?

      <--- Score

      56. Who is gathering Software reliability information?

      <--- Score

      57. Why are consistent Software reliability definitions important?

      <--- Score

      58. What happens if Software reliability’s scope changes?

      <--- Score

      59. How will the Software reliability team and the group measure complete success of Software reliability?

      <--- Score

      60. Are customer(s) identified and segmented according to their different needs and requirements?

      <--- Score

      61. Is there a Software reliability management charter, including stakeholder case, problem and goal statements, scope, milestones, roles and responsibilities, communication plan?

      <--- Score

      62. What customer feedback methods were used to solicit their input?

      <--- Score

      63. Has a project plan, Gantt chart, or similar been developed/completed?

      <--- Score

      64. What are the rough order-of-magnitude estimates of the cost savings/opportunities that Software reliability brings?

      <--- Score

      65. Is Software reliability currently on schedule according to the plan?

      <--- Score

      66. Has the Software reliability work been fairly and/or equitably divided and delegated among team members who are qualified and capable to perform the work? Has everyone contributed?

      <--- Score

      67. Is the scope of Software reliability defined?

      <--- Score