HW4: Chapters 11 & 12

11.4. What is the common characteristic of all architectural styles that are geared to supporting software fault tolerance?

Diversity. The logic is that a redundant, diverse system can survive the failure of one component because a fallback exists that can take the same input without failing in the same way. Such systems are often structured with multiple layers or channels behind the diversity, as in N-version programming or triple-channel architectures. After writing good code in the first place, diversity is the most effective way to improve a system’s ability to function reliably despite faults.

11.7. It has been suggested that the control software for a radiation therapy machine, used to treat patients with cancer, should be implemented using N-version programming. Comment on whether or not you think this is a good suggestion.

N-version programming essentially means that some number N of completely independent implementations of the target specification are created and run concurrently, with an overseer that compares their outputs and selects the most likely correct result. The upside of such a design is that any individual failure in any single version can be ignored or outvoted by the other versions. Also, since the versions are developed independently, it is unlikely that they all fail in exactly the same manner, which reduces the likelihood of a complete failure.
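The voting idea can be sketched in a few lines of Python. This is a minimal sketch, not a real safety architecture: the version functions stand in for independently developed implementations of the same specification, and the simple-majority rule is only one of several possible voting schemes.

```python
from collections import Counter

def n_version_vote(versions, input_data):
    """Run N independent implementations and return the majority result.

    `versions` is a list of callables, each a stand-in for an
    independently developed implementation of the same specification.
    """
    results = []
    for version in versions:
        try:
            results.append(version(input_data))
        except Exception:
            pass  # a crashed version simply loses its vote
    if not results:
        raise RuntimeError("all versions failed")
    winner, count = Counter(results).most_common(1)[0]
    if count <= len(versions) // 2:
        raise RuntimeError("no majority agreement among versions")
    return winner
```

With three versions, one faulty implementation (or one crash) is simply outvoted by the other two; only if a majority share the same fault does the wrong answer win, which is exactly the failure mode discussed below.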

However, there are downsides. Development resources must be split among parallel teams, so three mediocre implementations may replace one good one, and if a majority of versions share a fault, the correct result gets outvoted rather than the other way around. Even with three good implementations, only a third of the total development effort goes toward the delivered functionality, and the hardware needed to run the redundant versions keeps costs higher in the long run.

This type of system is essential for software whose failure can harm significant numbers of people; aircraft and nuclear power plants are perfect examples. A radiation therapy machine, however, may or may not be at the same scale. One could argue that if a large number of hospitals buy the machine, a large number of patients use it before a fault is discovered, and the fault leads to death, then it would cause more harm globally than a single airplane failure, since an airplane generally only has to crash once or twice before the product line is pulled and fixed. From this perspective, the radiation therapy machine absolutely requires N-version programming, even more so than ‘more visible’ systems like aircraft.

11.9 Explain why you should explicitly handle all exceptions in a system that is intended to have a high level of availability.

An unhandled exception will, in most languages, terminate the program, and any unplanned termination directly reduces availability. Exceptions are also among the most easily detected and diagnosed faults, since most languages attach descriptive error messages to them, so there is little excuse for not handling them properly. Explicit handling makes recovering from certain faults (especially bad input data) straightforward: by defining the course of action for each type of fault in each possible context, the software can log the problem, discard or repair the offending data, and keep running instead of crashing.
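As a rough Python sketch of the difference between crashing and staying available: the sensor format and function names below are made up for illustration, but the pattern — one handler for anticipated faults, one for everything else, and the loop never dies — is the point.

```python
import logging

def read_sensor(raw: str) -> float:
    """Parse one sensor reading; hypothetical 'value:units' format."""
    value, _units = raw.split(":")  # raises ValueError on bad format
    return float(value)             # raises ValueError on non-numeric value

def monitor_loop(readings):
    """Process readings without letting one bad input crash the service."""
    results = []
    for raw in readings:
        try:
            results.append(read_sensor(raw))
        except ValueError:
            # anticipated fault: log it, skip the reading, keep running
            logging.warning("discarding malformed reading: %r", raw)
        except Exception:
            # anything unanticipated: record it, stay available
            logging.exception("unexpected fault while reading sensor")
    return results
```

Without the `try`/`except`, the single malformed reading would terminate the whole loop; with it, the service degrades by one data point and continues.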

12.5. A train protection system automatically applies the brakes of a train if the speed limit for a segment of track is exceeded, or if the train enters a track segment that is currently signaled with a red light (i.e., the segment should not be entered). There are two critical-safety requirements for this train protection system:

1: The train shall not enter a segment of track that is signaled with a red light.

2: The train shall not exceed the specified speed limit for a section of track.

Assuming that the signal status and the speed limit for the track segment are transmitted to on-board software on the train before it enters the track segment, propose five possible functional system requirements for the onboard software that may be generated from the system safety requirements.

  1. The system must have redundant sensors and diverse signal interpretation, so that a single error cannot prevent the train from receiving the speed limit or signal status.
  2. The system must have some detection (sensors plus analysis) of track conditions, in case ice, snow, or rain lowers the safe operating speed below the local speed limit.
  3. The system needs safe, reliable default values in case it is unable to receive the speed limit of the next track segment.
  4. The system needs to stop the train (assume a red light) in case it is unable to receive the signal status of the next track segment.
  5. The system needs access to emergency brakes, and sensors in place to engage them, in order to prevent a head-on collision if something is detected on the tracks.
  6. The system needs to handle speed changes such that, when the limit rises, the train does not accelerate before crossing into the new segment, and when the limit falls, the train has decelerated to the new limit before crossing into it.
  7. The system needs to track the status of its hardware (especially battery, fuel, engine, brakes, and sensors); it must alert management when components are worn or due for maintenance, and give severe warnings (including refusing to embark) when essential components fail to respond or report themselves as broken.
  8. The system needs a method of communicating with the outside world in order to give live updates on any issues that may occur.
  9. The system should also be able to receive an ‘emergency stop’ command from outside, including from other trains.
  10. If the train must stop for any reason, it must broadcast an ‘emergency stop’ command to the nearby track to prevent other trains from potentially crashing into it.
  11. The system should keep verbose logs for audits and error analysis.
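Requirements 3 and 4 (safe defaults when transmission fails) might look something like this sketch. The conservative default speed limit and the use of `None` for a failed transmission are assumptions for illustration, not anything from the problem statement:

```python
DEFAULT_SPEED_LIMIT = 30  # km/h; hypothetical conservative fallback

def should_brake(current_speed, speed_limit, signal):
    """Decide whether to apply the brakes before the next track segment.

    `speed_limit` or `signal` may be None if transmission failed;
    both cases fall back to the safe behavior (requirements 3 and 4).
    """
    if signal is None or signal == "red":
        # unknown signal is treated as red: do not enter the segment
        return True
    # unknown speed limit falls back to a conservative default
    limit = speed_limit if speed_limit is not None else DEFAULT_SPEED_LIMIT
    return current_speed > limit
```

The key design choice is that every missing input maps to the *safer* outcome: an unreceived signal reads as red, and an unreceived speed limit reads as a crawl, so a communication fault can never cause the train to behave less cautiously than a known-bad condition would.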

HW3: Chapter 10

10.6 Explain why it is reasonable to assume that the use of dependable processes will lead to the creation of dependable software.

A dependable process leads to code that is reliable, resistant to failure, and stable by requiring that a number of subqualities are achieved, each with concrete goals and positive outcomes. While it is a specialized process, it is one well suited to the title ‘dependable’.

Auditable – something all code should absolutely be. When a system makes judgments that can drastically alter a person’s life, such as a facial recognition system used on security footage of a crime scene, or a heart monitor determining whether a heart has stopped, it is essential that as many individuals as possible, from as many backgrounds as possible, can clearly determine whether the code is correct. One cannot simply ‘be confident’ in systems that are important enough.

Diverse – even within this chapter, diverse code is argued to be unnecessary if the standard logic is sufficiently tested and vetted. Nonetheless, if your software requires redundancies, those redundancies should not fail in exactly the same way the main path failed; otherwise, what is the point of having redundancies at all? Diversity is the only way to implement redundancy properly, and it belongs here because it makes code not more efficient but more dependable, which is the whole point of this process.

Documentable – all code should be documented. Better documentation makes software easier to use correctly and easier to read and modify as needed. Documentability simply makes producing good documentation easier, so it is a worthwhile trait.

Robust – a recoverable error should allow the software to keep going. That is excellent design for any system, and especially for software that must be depended upon to work under all conditions.

Standardized – why reinvent the wheel (and force yourself to reinvent every wheel afterwards) when you want it to work reliably? Innovation is for cutting-edge systems and hacky scripts, not for dependable software.

In summary, all of these traits make code more dependable, and as such, are well suited to belong in a feature list of a ‘dependable’ process. Code made using the dependable process would almost certainly be more dependable than code not made using the dependable process. Therefore, the dependable process makes dependable code.

Other 10.6 A multimedia virtual museum system offering virtual experiences of ancient Greece is to be developed for a consortium of European museums. The system should provide users with the facility to view 3-D models of ancient Greece through a standard web browser and should also support an immersive virtual reality experience. What political and organizational difficulties might arise when the system is installed in the museums that make up the consortium?

Such a system primarily raises issues of policy and technical specification; political and organizational difficulties are harder to visualize. However, museums rely on visitors, so a virtual reality system might threaten their primary source of income, while doing it poorly might harm them even more. They must therefore build a good one, which requires resources, but museums are not extremely wealthy, so this can feel like a no-win scenario. The clearest upside is sharing art, for the pure purpose of education, with people who cannot make it to the museum, though this might cause the museum to rely more heavily on public funding to stay afloat. Conversely, such a technology might increase interest in the member museums, making tourism rise. The political issues, then, are those of conflicting forces wanting to minimize risk, maximize educational opportunity, minimize cost, and potentially invest properly in a future income opportunity. Since any of these angles can be argued effectively, and because museums are frequently beneficiaries of public funding, politics will most definitely come into play when determining policies that directly affect the availability or feature set of the virtual reality experience.

Another potential ‘political’ issue would arise if an artist featured in the museum does not want to be featured in the virtual museum. Because software and physical museums fall under different regulations, the artist might have that right, straining the organizational parameters of the project (display this but not that, yet somehow keep the virtual museum from looking empty) due to the impact of a political situation.

Organizationally, the software needs to be usable on both web and VR platforms. These are two extremely different platforms, so much so that two separate applications under the same name might be the only sane route. They will need different levels of detail, or even completely different modes of access (the web version being, potentially, more webcam style, whereas the VR system could present a virtual museum with high-resolution 3-D scans of the artwork). These are primarily technical and design issues, however; the organizational work would mostly involve keeping the two projects aligned in design and features.

10.10 It has been suggested that the need for regulation inhibits innovation and that regulators force the use of older methods of systems development that have been used on other systems. Discuss whether or not you think this is true and the desirability of regulators imposing their views on what methods should be used.

Regulation is a necessary part of becoming a valid industry; we shouldn’t be the wild west when our systems are used in life-saving or potentially life-ending technologies. Ultimately, any issue with the regulation process requires reform within that process.

Leaving any industry to self-regulation has time and time again proven to be ineffective and unacceptable. It is simply too tempting to cut a corner that ‘no one will see’, especially when business people are making the decisions; someone can always be found to do a dirty job, and innocents (and customers) are the first to be hurt.

It doesn’t really matter how behind the times a regulator’s practices are: they are in place because an expert put them there, possibly for a very good reason. And even if they aren’t up to date, at least a baseline exists at all.

Other 10.10 You are an engineer involved in the development of a financial system. During installation, you discover that this system will make a significant number of people redundant. The people in the environment deny you access to essential information to complete the system installation. To what extent should you, as a systems engineer, become involved in this situation? Is it your professional responsibility to complete the installation as contracted? Should you simply abandon the work until the procuring organization has sorted out the problem?

Jobs being made redundant to automation is an unavoidable result of this line of work. Some might even say a goal. Hopefully, those individuals can be reallocated to new positions which have been made possible due to the money saved in their own (former) department. It should be the imperative of employers (and governments) to reinvest funds saved by proper application of technology, not just for moral grounds, but to avoid wasting those resources entirely. CEOs don’t need a pat on the back, a windfall bonus check, and a cookie for managing to make their company up to date technologically; they need to continue to move forwards.

Ultimately this is a systemic issue. We are inevitably automating away every job, even our own, and a global long-term solution needs to be found; but progress is progress. Coal mining is no longer a job that anyone should be in; it’s a dying industry, and we shouldn’t artificially keep it alive when doing so costs us investment in future-proof solutions (such as solar) that provide a greater net positive for humanity and create jobs in new places. In the same way, we must look forward rather than cling to the past.

However, being denied the resources necessary to complete a job might mean that your employer is unworkable. Put it in writing, in no uncertain terms, how essential their cooperation is for the project to move forward. Document everything. Raise the alarm with the bosses. If nothing can be done, make it clear that the project will have to be cancelled; there should be stipulations in your contract for such a contingency. Hopefully it won’t come to that, but ultimately, if you cannot complete your job properly, you cannot complete your job.

HW2: Responses

A response to Silver Bullet, Cherry Picking, and Google Code Repo:

No Silver Bullet argues that large-scale software development will always be difficult, and that there are no easy solutions to this. The essay is dated in places, though while version control and automated testing have changed, many of the fundamentals of software engineering haven’t. Nonetheless, it seems to have been almost completely disproved, first by Git, then by Google’s version control system, in which a massively automated super-Git-like repository allows users to add code, test it, and incorporate it into the main program structure with extreme ease, all despite the ridiculous scale of their combined projects. Perhaps there is a silver bullet after all.

Cherry Picking, meanwhile, speaks of how difficult it can be to determine what code should be taken from a branch, and why; while also giving some recommendations on how to effectively find (and fix) bugs in a responsible, scientific, and documentation-heavy manner. It brings up many valid points about how tricky excessive branches can end up being. Again, this lends even more credibility to Google’s own no-branch system.

Finally, Google Code Repo, which really just made me super envious of having such a talented and dedicated support team to create and maintain such a glorious beast of a version control system. It’s pretty spectacular, and really makes me appreciate just how much Google has legitimately achieved despite (or perhaps because of) their occasional ruthless business move.

Really, all three articles just make me want to use Google’s system for… everything? Maybe someday we’ll get enough people to leave Google over ethical issues that they can form their own version control company, and make a rival to git.

HW1: Chapter 1

Ex 1.3, 1.8, 1.9, 1.10

Global 1.3: Briefly discuss why it is usually cheaper in the long run to use software engineering methods and techniques for software systems.

Because software engineering methods are thorough, they increase the initial cost, time, and effort of a new project. However, the same methods lead to a stronger foundation, producing software that is easier to support, modify, or expand; more approachable for new programmers, thanks to a concrete design process; more accessible to users (or auditors), thanks to extensive documentation; and generally more thoroughly tested. While it is possible (especially in smaller projects) for an ad-hoc effort to reach all of the targets (acceptability, dependability, security, efficiency, maintainability), software engineering does a lot to ensure and standardize reaching each of these goals, and it is essential for larger projects, where no amount of ‘good practice’ can outweigh sheer size and complexity.

Proper 1.3: What are the four important attributes that all professional software should possess? Suggest four other attributes that may sometimes be significant.

The “figure 1.2” attributes are acceptability, dependability (and security), efficiency, and maintainability. For my four, I would formally declare security as an independent attribute, and add readability, reusability (as a component in future projects), and extensibility (the flexibility to have new features added). All of these, with the exception of security, could be defined as subsets of the four core attributes.

Global 1.8: Noncertified individuals are still allowed to practice software engineering. Discuss some of the possible drawbacks of this.

Misapplication of software engineering processes is, at best, ineffective for what they’re designed for and a waste of resources. At worst, it can be downright counterproductive, creating unnecessary confusion or making otherwise simple code extremely complex as a confused programmer attempts to shoehorn in practices where they do not belong.

Proper 1.8: Discuss whether professional engineers should be licensed in the same way as doctors or lawyers.

Licensing is beneficial because it

  • Improves one’s resume
  • Certifies a useful and non-standard skillset
  • Guards against the work being done incorrectly by non-certified individuals
  • Signals that the work is important enough to require a certified individual

Licensing ‘in the same way as doctors and lawyers’ is ridiculous, however, because medical school and law school both take enormous amounts of time and money to complete. This is an important skill, but not one that requires a doctorate to practice; a certification system therefore seems like a much better fit.

1.9: For each of the clauses in the ACM/IEEE Code of Ethics shown in Figure 1.4, propose an appropriate example that illustrates that clause.

Act in the Public Interest: if software is in a high-stakes field (e.g., medical), one must prioritize testing to ensure there are no dangerous failure states; if management wants to release an incomplete product, one should do whatever is necessary (within the law) to stop it from being released irresponsibly. An example would be heart monitor software that has a chance to crash. If management tries to release it anyway, raise the issue (with documentation of your efforts) first with management, then with their bosses, then with the legal team, then with the CEO, and finally with the press; such a flaw cannot be allowed to ‘sneak into’ the industry and kill real people due to unchecked corporate greed.

Client and Employer: One should always engage in good faith, and work toward what the client’s goals are, not what they should be. However, this does not excuse working on an unethical project. An example would be if you work at a facial recognition company who decides to start selling their software to ICE. That’s unethical, and you shouldn’t work there.

Product Professional Standards: One should never release code which is incomplete or not up to personal quality expectations. An example would be a project that is not life-threatening to release early, but still definitely not ready; don’t let management push it out the door without at least some documentation of your disagreement, but don’t take it as far as a public issue.

Integrity of Judgement: one should attempt to retain scientific detachment, especially when testing or bug-fixing code, or when hearing feedback; the software comes first, not one’s own emotions. For example, when gathering feedback on a prototype, you have to be detached enough to listen when a user criticizes every single element of your design, no matter how attached you are to it, because the feedback may well be valid. Some of it won’t be actionable; some of it will. Your job is to improve what you can and remember the rest for the next project. There is always room for improvement.

Ethical Management: managers should be ethical; if they aren’t, you probably don’t want to work for them. As an example, if your manager doesn’t listen in an ethical-issue scenario (e.g., the public-interest case at the top of this list), you definitely don’t want to be working under them. They’ll probably try to blame you if something goes wrong anyway, so do keep documentation of everything.

Advance the Profession: It’s your job to make your job look good. (To me, this one is more a ‘strategy for success’, less an ‘ethics’ item. It’s odd to include it here.) For example… if your boss asks what you’ve been doing, try to represent the industry well: work hard, have a lot to show, and make sure it improves on the actual business at hand, rather than just offers an alternative.

Colleagues: Be nice to them. For example, if you have a colleague, be polite to them, and listen to their suggestions.

Self: work to always improve, while retaining your ethics. For example, nearly every year something is released that fundamentally changes (and improves) some process of software development, website design, or programming in general. Keeping up with new frameworks is a full-time job, but it keeps you relevant in this ridiculously fast-evolving industry.

Global 1.10: The “Drone Revolution” is currently being debated and discussed all over the world. Drones are unmanned flying machines that are built and equipped with various kinds of software systems that allow them to see, hear, and act. Discuss some of the societal challenges of building such kinds of systems.

Drones are a simple example of a technology which the current laws and rules of society simply weren’t designed for. When scaled to their logical conclusion, we’re facing a perfect information system; a surveillance state of unprecedented power, accessibility, and affordability. Ignoring the issues with governments having access to this kind of information – what of neighbors? Corporations?

Privacy becomes unenforceable, and mapping out an individual’s activities, location, or patterns is the first step in any targeted malicious act. Imagine a robbery crew that knows where everyone is at all times: perfect stealth, without having to post a watch. Nah, that’s small-time stuff. Let’s go bigger. Scale it to a national level, automate the data collection, and apply some hefty data science to analyze it all in real time: unified databases with complete activity tracking of the entire population.

Then look at the technologies that such information enables. Try an electronic billboard that displays individually targeted advertisements to people as they go by, ads tailored by tracking their actual physical responses to viewing them. A sufficiently advanced system down that path could effectively control an individual, not with a single advertisement, but with a sequence of clever manipulations that lead, inevitably, to a product… or a candidate. And what use is democracy, if the population itself is compromised?

Drones themselves might be merely the beginning; but they are the beginning of something that will be very difficult to put back into the bottle.

Nonetheless, progress cannot be stopped; we must engage carefully (and with regulation as needed), in order to explore the possible positive futures, while safeguarding against the negative ones. It’s a scary world out there, but any technology can have a positive impact if done right; drones (and perfect information systems) included.

Proper 1.10: To help counter terrorism, many countries are planning or have developed computer systems that track large numbers of their citizens and their actions. Clearly, this has privacy implications. Discuss the ethics of working on the development of this type of system.

The ethics of such a system are complex to say the least. On the one hand, a technologically feasible system which can save lives could be construed as ‘killing people by not being made’. If a global catastrophe would have been preventable with such technology, but wasn’t, then that falls on the ethical heads of anyone who opposes it.

On the other hand, creating a perfect information system has more than a few misuses. When ‘a government’ has access to that kind of power, what it really means is that ‘the people who work in government’ have access to that power; and people are people. People are selfish, greedy, have prejudices, are susceptible to religious fervor, and similarly susceptible to manipulation. Systems are hackable. Someone getting access to the wrong information could allow horrible crimes to be committed; somebody modifying the data within a ‘perfect’ system could allow a hacker to make anyone they want a target of world governments. Too consolidated an information system, without the social structures in place to limit and manage its potential, does far more harm than good.

For now, I think that we are not ready for a system such as this. But the upsides of one done correctly are meaningful, and I would like humanity to move toward being ready for such applications of technology. After all, they’re coming whether we’re ready or not, so we have to try to do it right the first time, before it’s too late.

HW0: Introduction

A surprise, to be sure, but a welcome one.

I am Jack Fraser, a Senior in Computer Science at the College of Charleston. This semester (Fall 2019) will see many posts to this blog for assignments for this class, focusing primarily on Software Engineering.

My hobbies include trying to use WSL for applications intended for full Linux, making .exes in Python, and using VS Code religiously. I have an irrational dislike of Java, and pretend to like C despite actually being rather scared of it. Nonetheless, my next target language to learn is C++, especially considering its versatility and power when it comes to user interfaces. The Visual Studio XAML Designer is just too tempting.

I hope you enjoy the blog!
