Automatic editing

For changing file contents, we often use editors.

But sometimes it is a good idea to do this from a script, for example to change many files at once or to apply the same kind of change repeatedly.

The classic approach to this is using sed, which is made exactly for this purpose.

But most scripting languages, for example Perl, can do similar work to sed. Since Perl was developed with the idea of replacing awk, sed and some shell scripting while being a complete programming language, it claims to do the same job better than sed. Ruby, Python, Raku and some others can be used as well.

Basically this fits the typical bash style of working: the bash script uses specific tools to perform small tasks, and together they achieve the end result.

I am quite familiar with Perl, so for this kind of replace-in-file task I use Perl, but in a way that would, with minor changes, also work with the other four languages mentioned and probably some others. Of course for more advanced stuff there are areas where for example Perl or Raku work better.

One thing needs to be thought about:

These „scripted search and replace“ operations primarily work like pipes, so with something like

perl -p -e 's/ä/ae/g;s/ö/oe/g;' < infile > outfile

or the equivalent in sed, Python, Raku or Ruby. In the end it is probably desired that the outfile just replaces the infile, but

perl -p -e 's/ä/ae/g;s/ö/oe/g;' < infile > infile

won’t work, because the shell truncates infile for the output redirection before Perl even gets to read it.

So a universally applicable approach is to just mv the modified file over the original afterwards, like

perl -p -e 's/ä/ae/g;s/ö/oe/g;' < infile > outfile \
&& mv outfile infile

Sometimes it is desirable to retain the original file to be able to check if the transformation did what it was meant to do and to be able to go back…

mv infile infile~
perl -p -e 's/ä/ae/g;s/ö/oe/g;' < infile~ > infile

And then the files ending with ~ need to be cleaned up when everything is done.

Since this is the most common way to work, there is a shortcut (at least for Perl):

perl -p -i~ -e 's/ä/ae/g;s/ö/oe/g;' infile

If no backup is needed, this can be done as

perl -p -i -e 's/ä/ae/g;s/ö/oe/g;' infile

For these simple examples, using sed, Ruby, Perl, Python or Raku is more a question of which syntax we know by heart. And of course, Perl, Python and sed are most likely installed on almost any Linux server…

Sometimes we want to do more complex things: depend on a context or on a state and do different substitutions depending on that, or aggregate information in variables and insert it at some point. For more complex things I really prefer real programming languages like Perl or Ruby, but a lot can be done using just bash, sed and awk, if you know them really well. A small sketch of such a state-dependent substitution follows below.
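
As an illustration, here is a minimal Perl sketch (the marker names and the substitution are made up for this example) that replaces „foo“ by „bar“ only between two marker lines and counts how many replacements were made. It can be invoked like the one-liners above, e.g. with perl -i~ replace-between-markers.pl infile.

#!/usr/bin/perl
# state-dependent substitution: only replace between BEGIN-MARKER and END-MARKER
use strict;
use warnings;

my $active = 0;
my $count  = 0;
while (<>) {
    $active = 1 if /BEGIN-MARKER/;
    $active = 0 if /END-MARKER/;
    $count += s/foo/bar/g if $active;
    print;
}
print STDERR "replaced $count occurrences\n";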

A funny thing is that there is the simple editor ed, which can actually be scripted from the command line to apply editing commands to a file.
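
To stay with the umlaut example from above, something like this should do the same replacement with ed, editing the file in place (without keeping a backup):

ed -s infile <<'EOF'
g/ä/s//ae/g
g/ö/s//oe/g
w
q
EOF

The -s option just suppresses the byte counts that ed would otherwise print.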

I remember that in some project there were a lot of template files that were used, for example, to create emails. It was quite a good idea to change them with Perl scripts (and retain the originals). Then I could show the outcome to the stakeholders and change the scripts until it was OK. This works for a couple of hundred files without problems, but of course it works better if they are kept clean, with no unnecessary differences sneaking in that cause the automatic editing to fail or to produce different results on some of the files. Testing remains important, of course.

It is better to learn at least some basic automated editing than to do tedious manual editing of a couple of hundred files, possibly more than once.

Meetings with some members remote

Almost all meeting rooms have these technically excellent Jabra devices for running a meeting and letting someone participate who is not in the meeting room.

What happens during these meetings more often than not:

The remote participant or the remote participants are forgotten…

While it may be a useful approach in some circumstances, think twice whether it is not better to just do the meeting with headphones for everyone, even though a lot of people are actually sitting in the same office location.

vim and vi

We all have our preferences for editors and IDEs.

I like Emacs and IntelliJ.

But I also like vi.

When you work on Linux servers that are not your own, there is a certain, quite limited set of software installed that makes things work.

Usually you will find Perl and Python. And vi as editor.

Of course it is possible to install Emacs, or some people like „tiny“ editors like nano, pico, joe, jed, kedit, gedit,… But then again, you need to install them. Not on one server, but on a lot of servers. Where they serve no purpose other than allowing people who are not comfortable with vi to edit.

So I guess it is OK, for a number of servers belonging to one team, if the team agrees to use an additional editor.

But generally it has to be assumed that this is not the case or that such an editor is not there on every server.

So we need to learn vi or vim.

Vim is an improved version of vi and it is what Linux systems today have as vi. Usually. And it is less and less common that AIX, Solaris or other Unices are still in place; it is more and more Linux only, with vim as the default editor that is always there. And it works without a graphical user interface, which is a big plus. Servers can and often do have X11 libraries installed, so it is possible to ssh with the -X option to allow X11 forwarding. But this needs a little bit of setup on both sides, which can be some work. And it can fail because of a missing part. And changing the server setup is not always desirable. And not always efficient, if it is a huge number of servers. So not needing X11 can be a plus and makes vi universal.

So what is annoying in the beginning: the editor has different modes. Text can be entered in insert mode. When that ends, keys work as commands. So some effort is needed to understand the basics, while some editors are self-explanatory and work like „normal editors“ from the beginning. But just learn it; a handful of commands, like the ones listed below, is enough to get started.
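
Just as a tiny, incomplete starting set (vim of course offers much more):

  • i switches to insert mode before the cursor, Esc goes back to command mode
  • :w saves, :q quits, :wq saves and quits, :q! quits without saving
  • x deletes the character under the cursor, dd deletes the current line
  • /pattern searches forward, n repeats the search
  • u undoes the last change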

On the other hand, vi and even much more so vim is a very powerful editor. It has a lot of functionality and for people who really know it well, it can be very fast. As with Emacs it IS possible to configure vim to a great extent. Damian Conway has written macros for vim that allow it to work as a replacement for PowerPoint for presentations that are very code centric. With the limitations of what text mode provides, but it is still amazing.

This is not the way to go on thousands of Linux servers, because it is not efficient to distribute the configuration and maybe not desirable.

So I work with Emacs with my configuration. I am totally lost with Emacs with the default configuration.

And I work with vi with the default configuration and deliberately avoid adding my personal configuration.

Since Emacs, Vi and IntelliJ are different enough and look different enough, it is not such a danger to mix them up.

There is more than one way to do it

Quite often we need to do something (system engineering, software development, database,…) and when we let ten people do it, it will be done in eleven different ways.

Some organizations like to „standardize“ things…

And to some extent that is needed actually.

But the challenge is to find what needs to be standardized and what not.

It is absolutely clear that it will lead to chaos if nothing is standardized, because stuff will not work together well and people will have a hard time taking over somebody else’s work or helping each other.

But the other extreme can be bad as well… It kills creativity and it forces people to use inferior solutions that take a long time to implement. Just think of standardizing on hammers and then using screws as nails. Also, too much standardization freezes an approach too early, when there should be different approaches competing, with the more successful ones gaining ground.

What can be done and what works: for a big piece of software it is possible that different teams work on different components. They can use different programming languages. In some cases they should actually do that, if they need specific strengths of one language in one area and of another one in another area. Of course a lot of things like interfaces, security, character encodings, networking and network architecture need to be standardized. While it may or may not be perfectly legitimate to use for example both PostgreSQL and MongoDB and Cassandra in the same application, there needs to be a good reason for using two different relational databases.

I have seen this happen, but at some point in time the team using programming language A was quite successful and the team using programming language B was not. And there was no strong specific benefit of language B for what they were doing, other than that they knew programming language B better. Companies can replace teams or train people on language A. Or try to train them and replace some of them depending on the success of the transition. Possibly it is not the language, but the team that was the reason… Not necessarily the people, but for some reason they got on the wrong track and were unsuccessful this time… In the end, the project decided to standardize on language A and have this part rewritten by people who knew language A quite well. Even though this meant writing stuff twice, in the end it brought a much better result. This „rewriting something with better technology“ can be a dangerous trap. Often the second solution takes ten times longer to write until it does what the „inferior old solution“ did. But in this case that problem did not occur at all.

Now we should look at the IT work from a more abstract point of view.

Some work is done and in the end there is a result. This can be a source code, a running software, a server setup, a document, a database setup, a network setup,…

As a general rule, we want to standardize some aspects of the result. How the result is achieved should be left to the engineer.

So for many ad-hoc-tasks people can use different tools, like different editors and IDEs, different configurations for these, scripts in different scripting languages etc. Of course efficiency is a goal. Scripts can even be documented or checked into git for optional usage by those who like to use them. People can use Perl, Python, Ruby, Bash or something else for scripting. But the scripts can actually become results and then become candidates for standardization. For example, for managing a large number of servers, a lot can be done using ansible, which can be seen as a kind of fourth generation (declarative) language for doing exactly this. It makes sense to agree on using it all over the place, and then the ansible playbook, and not only the result of its execution, is the result of the work and probably checked into git. A tiny playbook sketch follows below.
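
Just to illustrate what such a playbook looks like, a minimal sketch could be something like this (host group, package and file names are made up for the example):

- hosts: webservers
  become: true
  tasks:
    - name: install ntp
      package:
        name: ntp
        state: present
    - name: deploy ntp configuration
      copy:
        src: files/ntp.conf
        dest: /etc/ntp.conf
        mode: "0644"
      notify: restart ntp
  handlers:
    - name: restart ntp
      service:
        name: ntp
        state: restarted

It would be run with something like ansible-playbook -i inventory ntp.yml and describes the desired state rather than the steps to get there.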

When working on Linux servers, vi or most likely vim will be installed on every server. With the default configuration, at least for root and other system users. It is possible for an organization to agree to install a second editor on every server without too much effort. But it is usually not done and it does not really solve the issue, so most of the time editing on servers is standardized to the usage of vi. Get used to it, it will be like that anyway in the next organization you are working for.

When someone writes a super complex script that nobody else understands, then this can be a combination of unnecessary complexity and inherent complexity. If the inherent complexity is high, then it is quite likely that this script will and should be reused, and then it is no longer an ad-hoc solution but it becomes a result. Then it becomes interesting to make it maintainable and readable, so others can work with it and improve it.

But in all levels there are often different ways to do something that are all legitimate and good and efficient for the purpose.

Configuration Files

Just a short story as a starter:

I went to a Covid19 test and a colleague asked the very legitimate question why I was doing a test in spite of being vaccinated twice.

This has a lot to do with configuration of software.

The rules that local, regional, national or EU governments impose are like configuration files. They typically prepare rules, publish them and activate them on a given date, but sometimes they become active immediately. The first case is like using a more or less clean process for managing configurations and the second case is like ad-hoc editing with vi. Both have their place, but the second case should be understood as a rare exception that is needed to keep the system running, and there should be steps afterwards to document this change and to feed it into the regular process.

So the rules made by governments configure for example border controls and controls at airports. At each border and at each airport or port there are natural checkpoints where a person can be stopped. When travelling by train, by bicycle, by car or by motorcycle, there are usually only the border controls.

Now these configurations change frequently and there are a lot of complex rules. Like configuration files of software in real life.

So it can happen that somewhere they ask for a test even if the person has been vaccinated. Or in other situations the vaccination is enough. Or the proof of vaccination has to be given in a certain format that is maybe still impossible to get. Or the vaccination only counts if it has been done using a certain vaccine. Now it is advisable to equip oneself in such a way that it is possible to pass absolutely all checkpoints of the whole trip.

Now there can be errors in the system. For example the persons who run a checkpoint might not correctly know the rules valid on that particular day and stop someone who actually complies with the rules.

It is said that already 2000 years ago the configuration of customs controls was falsified in an ad hoc manner. Not only by mistake, but actually deliberately.

I think that this describes quite well the challenges and complexities of managing software configuration.

So almost every software has some configuration that comes with it.

This can be flat files that are more or less specific to the configuration.

It can be stored in a database. This can be a classical SQL database or a NoSQL-database or some kind of lightweight database that is accessed only via a library.

Configuration can be prepared in advance.

Or it is edited on the server.

Or it is edited using settings of the software via its GUI.

Now for a seriously managed software installation, it is a good idea to have well defined installation artifacts, which include the configuration. But less technical configuration can actually be changed by sophisticated end users, and this can actually be the right way to do it.

It is a good idea to keep some understanding of the configuration of the software. A way to do this is to have it in git, possibly using tags and/or branches. I do not recommend actually having git on all the machines directly, but rather to have some configuration master server on which the files are edited, prepared and pushed into git. Then they can be pushed to the servers by some mechanism, via tools like ansible or via simple scripts… A small sketch of how this can look is given below.
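
Just as a rough sketch of such a workflow on the configuration master server (paths, file and playbook names are made up for the example):

cd /srv/config/ntp
vi ntp.conf                                   # edit the prepared configuration
git add ntp.conf
git commit -m "reduce polling interval"
git push
ansible-playbook -i inventory deploy-ntp.yml  # distribute it to the servers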

Remote Work (15 months later)

After having worked mostly remotely for more than 15 months, some more experience has been gained on this issue than in April 2020.

It looks like conservative organizations still have a hard time getting used to this concept and instead insist that they are this rare special exception that absolutely needs to be in the office.

Usually privately owned companies are more innovative and small companies are more innovative. But I have observed the contrary… Some small companies are very conservative and really reject adopting technology and processes that allow working remotely in an efficient way. And some relatively big, state owned companies manage to do this really well.

In the end it depends on the people. If the company does not bully away good people and gives them some freedom, they can organize good stuff, just for the love of technology (and of course they get paid). If a small company has a really good business idea and has implemented this idea really well, it can afford to be a bit less efficient in some usually minor areas.

Now for Covid19 countries had to make different choices. One strategy was to shut down as much as possible to get Covid19 down to zero, which is what Australia, New Zealand and China tried. And which implies, if this strategy is retained, keeping the borders closed for several years in almost iron curtain style. Most countries have followed a strategy of just keeping the numbers at a level that can be dealt with, for example Sweden and Switzerland. And some countries have tried to get vaccinations very early and very fast… Now it was the question whether the people or the companies have the stronger influence on the government. In Sweden and Switzerland companies were very strongly urged to let their people work remotely whenever possible, but these countries never imposed restrictions that constrained basic human rights, like France in the first phase of Covid or Germany recently. Anyway, responsible companies made remote work possible, where it is reasonable, to at least reduce the density in the office and not to forget to reduce the density in the trains and public transport for the commute. For our American friends: in modern countries, a lot of people come to work by train, by bus and by bicycle. Which they do in the US as well in the Bay Area and in New York.

So now, what are the pros and cons of remote work? What can we learn from it?

After a year or so, people got used to working remotely. The technology works. Meetings work. So it has become a real option for many tasks.

Pros:

  • You do not have to spend time on the commute.
  • You can eat lunch at home
  • You do not get distracted by working in a loud office
  • You can have a nap after work
  • If remote work from abroad is allowed and you can afford it: you can work from really cool places

Cons:

  • When living alone most of us really miss meeting people after some time…
  • Some (actually not so many) meetings are more efficient when in person
  • You get a clean separation between work and free time when working in the office

So what are the right answers:

I think it is good to let people choose if and how much they work from home. The point is: if there is a task that can be done really better in the office, it is good to come to the office… If there is a task that can be done better from home, it is good to be able to just tell this to the team and do it from home. At the end of the day, some people will like to work 100% in the office, some will like to work almost 100% remotely and many will work some „mix“. Of course it is good to make sure the commute is not too long or at least not too unpleasant. Going by train allows making much better use of the time than driving. And commuting by bicycle has the side effect of being free exercise and for that reason being the only mode of transport that increases life expectancy. I am writing this blog post on my laptop in a train. My advice: if you like to work at home and are allowed to choose, I recommend going to the office a few times a month, like once a week or like a few adjacent days, depending on the distance.

A challenge will be: when nine people sit in the office and have a meeting and the tenth is participating remotely via a Jabra device, it requires a lot of discipline not to lose the remote guy. Those who have experienced it know what I am talking about.

So at the end of the day: we can all profit from what we have learned during the pandemic and use our experience to apply remote work even when there is no virus and no government that forces this on us.

Unit and Integration Test with Databases

From an ideological point of view, tests that involve the database may be unit tests or may be integration tests. For many applications the functional logic is quite trivial, so it is quite pointless to write „real unit tests“ that work without the database. Or it may be useful.

But there is a category of tests that we should usually write, that involve the database and the database access layer and possibly the service layer on top of that. If layers are separated in that way. Or if not, they still test the service that is heavily relying on a database. I do not care too much if these tests are called unit tests or integration tests, as long as these terms are used consistently within the project or within the organization. If they are called integration tests, then achieving a good coverage with these integration tests is important.

Now some people promote using an in-memory database for this. So at the beginning of the tests, this database is initialized and filled with empty tables and on top of that with some test data. This costs almost no time, because the DB is in memory.

While this is a valid way to go, I do not recommend it. First of all, different database products always behave differently. And usually it is a bad idea to support more than one, because it significantly adds to the complexity of the software or degrades the performance. So there is this database product that is used for production. For example PostgreSQL or Oracle or MS-SQL-Server or DB2. That is what needs to work. That means the tests need to run against this database product, at least sometimes. This may be slow, so there is the temptation to use H2 instead, but that means writing tests in such a way that they support both DB products. And enhancing the code to support, at least to a minimal extent, the H2 database as well. This is additional work and additional complexity to maintain.

What I recommend is: use the same database product as for production, but tune it specifically for testing. For example: you can run it in a RAM disk. Or you can configure the database in such a way that transactions do not insist on the „D“ of „ACID“. This does not break functionality, but causes data loss in case of a crash, which is something we can live with when doing automated tests. PostgreSQL allows this optimization for sure, possibly others as well; a sketch of such settings is shown below.
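
For PostgreSQL, a test-only tuning could for example set something like the following in postgresql.conf (this trades crash safety for speed, so it must never be used for production data):

fsync = off
synchronous_commit = off
full_page_writes = off
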
Now for setting up the database installation, possibly with all tables, we can use several techniques that allow having a database instance with all table structures and some test data on demand. Just a few ideas that might work:

  • Have a directory with everything in place and copy it into the ram disk and start it from there
  • Just have a few instances ready to use, dispose them after tests and already prepare them immediately after that for the next test
  • Have a docker image that is copied
  • Have a docker image that is immutable in terms of its disk image and use a RAM disk for all variable data. Much like Knoppix
  • Do it with VMWare or VirtualBox instead…

At the end of the day, there are many valid ways. And in some way or other, a few master images have to be maintained and all tests start from such a master image. How it is done, whether on classical servers, on virtual servers, on a private cloud, on a public cloud or on the developer’s machine, does not really matter, if it just works efficiently.

If there is no existing efficient infrastructure, why not start with docker images that contain the database? Something like the sketch below could be a starting point.
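
A minimal sketch using the official PostgreSQL image (container name, password and version are just placeholders) could look like this; the --tmpfs option keeps the data directory in RAM, which fits the ideas above:

docker run -d --name testdb \
  -e POSTGRES_PASSWORD=test \
  -p 5432:5432 \
  --tmpfs /var/lib/postgresql/data \
  postgres:15

The schema and the test data can then be loaded by the test setup, or a derived image with the dump already applied can be built.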

We need to leave the old traditional thinking that the database instance is expensive and that there are few of them on the server…

We can use the database product that is used for production environments even for development and testing and have separate, fresh images for testing. We should demand the right thing and work hard to get it instead of spending too much time on non-solutions that work only 80%. It can be done right. And laziness is a virtue, and doing it right in this case is exactly serving this virtue.

Electronic Vaccination Certificates

It is becoming increasingly important to have a way to easily prove being vaccinated against Covid-19 in a way that works at least throughout Europe.

This can be a piece of paper and it can be an app.

People are very concerned about faking the certificate.

This can happen and it is of course a serious crime.

The problem is that this is needed really fast and thinking about a typical software project it is very ambitious to be live in two months if it has not really started yet.

But leaving that timing issue aside, is it a reasonable idea?

Actually there are apps for railway tickets, boarding passes and of course for helping to prove one’s identity when logging into some ebanking systems.

In Switzerland railroad tickets are either bought for a specific ride, or people buy a „flat rate“ and can use any train within Switzerland as often as they want, already paid for with the annual payment. And there is a combination of both that makes each ticket cheaper for people who pay a much lower annual rate.

Now there is a mobile app and a chip card. Both provide a QR-code. This allows the trainmen to scan it with the app on their phone and to see the photo of the person and the person’s subscriptions. The actual ticket for guys without the flatrate is another QR-code. This has been in use for a few years now and seems to work quite well.

Of course it is based on the fact that railroad trainmen are a relatively small, trusted group that has access to this kind of information about travelers. That is a different situation from having a terminal to scan the app at every cinema, restaurant etc. throughout Europe. The photo is convenience, but it could be provided by showing an ID. The QR-code could be signed with the private key of a health agency, whose public key is available to the public and allows verifying that the content is correct; the principle is sketched below. This way it could just provide all the information without using a server. The other variant would be that the QR-code is a key for a lookup on a server. But not depending on storing the important data on a server and instead having it all in the QR-code is actually a good idea. If it is desired to provide test results as well, then it might be a bit easier to provide access to a server and store all the information there. But since vaccinations and tests can be done in any country, and it is probably not a reasonable or desirable idea to implement some mutual data exchange between the servers, and it is probably also not possible to tap the NSA-servers for this information, it will probably anyway end up with one QR-code for each event, where events are tests, vaccinations and maybe recoveries from the actual Covid19 sickness. In this case they could work without accessing a server.
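
Just to illustrate the signing principle (this is not the actual scheme used for these certificates, and the file and key names are made up), the content of such a certificate could be signed and verified with standard tools like OpenSSL:

openssl dgst -sha256 -sign agency-private.pem -out cert.sig cert-data.json
openssl dgst -sha256 -verify agency-public.pem -signature cert.sig cert-data.json

The first command is run by the agency with its private key, the second one can be run by anybody with the published public key. The data and the signature together are what would be encoded in the QR-code.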

The whole functionality is probably not too difficult to build, but these security related software projects tend to require working even more thoroughly than usual. Also it is probably important to look into the issue of data privacy, which might complicate things a lot, especially due to the fact that anybody in the world can read the certificate when it is shown and by that access sensitive data of the user. And users have to show it every day on many occasions…

Spline Approximation (Mathematics II)

This is the third of a series of articles about spline approximation. If you have not done so, you should start by reading the previous articles of the series.

So we are looking for a function g which approximates a given set of points \{(\xi_j, \eta_j)\}_{j=1\ldots N} as a linear combination

    \[(1)\thickspace g(x) = \sum_{i} a_i f_i(x)\]

of functions f_i, as described in the previous article with

    \[(2)\thickspace f(x)= \begin{cases} 0 &\text{for } x \le -2\\ (x+2)^3&\text{for } -2 < x \le -1\\ 1+3(x+1)+3(x+1)^2-3(x+1)^3&\text{for } -1 < x \le 0\\ 1+3(1-x)+3(1-x)^2-3(1-x)^3&\text{for } 0 < x \le 1\\ (2-x)^3&\text{for } 1 < x \le 2\\ 0&\text{for } x > 2 \end{cases}\]

and with h chosen such that x_i=x_0+i*h for all i

    \[(3)\thickspace f_i(x)=f(\frac{x-x_i}{h}) \text{ for } i=-1,\ldots,n+1\]

Now up to four of the f_i can overlap in one subinterval. Assuming the subinterval is

    \[S=[x_i,x_{i+1}]\]

and the arithmetic mean of the borders is

    \[X = \frac{1}{2}(x_i+x_{i+1}) = x_i + \frac{h}{2} = x_{i+1} - \frac{h}{2}\]

This interval is touched by the base functions f_{i-1}, f_i, f_{i+1}, f_{i+2}, so the function g can be written for x\in S as

    \[g(x)=af_{i-1}(x)+bf_i(x)+cf_{i+1}(x)+df_{i+2}(x)\]

    \[= af\left(\frac{x-x_{i-1}}{h}\right)    +bf\left(\frac{x-x_{i}}{h}\right)    +cf\left(\frac{x-x_{i+1}}{h}\right)    +df\left(\frac{x-x_{i+2}}{h}\right)\]

    \[= a\left(2-\frac{x-x_{i-1}}{h}\right)^3\]

    \[+b\left(1+3\left(1-\frac{x-x_{i}}{h}\right)+3\left(1-\frac{x-x_{i}}{h}\right)^2-3\left(1-\frac{x-x_{i}}{h}\right)^3\right)\]

    \[+c\left(1+3\left(\frac{x-x_{i+1}}{h}+1\right)+3\left(\frac{x-x_{i+1}}{h}+1\right)^2-3\left(\frac{x-x_{i+1}}{h}+1\right)^3\right)\]

    \[+d\left(\frac{x-x_{i+2}}{h}+2\right)^3 \text{ using (5)}\]

Now we have

    \[x_{i-1}=X-\frac{h}{2}\]

    \[x_{i}=X+\frac{h}{2}\]

    \[x_{i+1}=X+\frac{3h}{2}\]

    \[x_{i+2}=X+\frac{5h}{2}\]

and thus

    \[g(x) =a\,\left(2-{{x+{{h}\over{2}}-X}\over{h}}\right)^3\]

    \[+b\,\left(-3\, \left(1-{{x-{{h}\over{2}}-X}\over{h}}\right)^3+3\,\left(1-{{x-{{h  }\over{2}}-X}\over{h}}\right)^2+3\,\left(1-{{x-{{h}\over{2}}-X  }\over{h}}\right)+1\right)\]

    \[+c\,\left(-3\,\left({{x-{{3}\over{2}} h-X }\over{h}}+1\right)^3  +3\,\left({{x-{{3}\over{2}} h-X}\over{h}}+1 \right)^2  +3\,\left({{x-{{3}\over{2}} h-X}\over{h}}+1\right)+1 \right)\]

    \[+d\,\left({{x-{{5}\over{2}} h-X}\over{h}}+2\right)^3\]

Now let

    \[y=x-X\]

and hence

    \[x=y+X\]

and substitute that:

    \[g(x)=a\,\left(2-{{y+{{h}\over{2}}}\over{h}}\right)^3\]

    \[+b\,\left(-3\,  \left(1-{{y-{{h}\over{2}}}\over{h}}\right)^3+3\,\left(1-{{y-{{h  }\over{2}}}\over{h}}\right)^2+3\,\left(1-{{y-{{h}\over{2}}}\over{h}}  \right)+1\right)\]

    \[+c\,\left(-3\,\left({{y-{{3\,h}\over{2}}}\over{h}}+1  \right)^3+3\,\left({{y-{{3\,h}\over{2}}}\over{h}}+1\right)^2+3\,  \left({{y-{{3\,h}\over{2}}}\over{h}}+1\right)+1\right)\]

    \[+d\,\left({{y-  {{5\,h}\over{2}}}\over{h}}+2\right)^3\]

    \[={{d\,y^3}\over{h^3}} -{{3\,c\,y^3}\over{h^3}} +{{3\,b\,y^3}\over{h^ 3}} -{{a\,y^3}\over{h^3}}\]

    \[-{{3\,d\,y^2}\over{2\,h^2}} +{{15\,c\,y^2 }\over{2\,h^2}} -{{21\,b\,y^2}\over{2\,h^2}} +{{9\,a\,y^2}\over{2\,h^2 }}\]

    \[+{{3\,d\,y}\over{4\,h}} -{{9\,c\,y}\over{4\,h}} +{{33\,b\,y}\over{4 \,h}} -{{27\,a\,y}\over{4\,h}}\]

    \[-{{d}\over{8}} +{{5\,c}\over{8}} +{{17\,b }\over{8}} +{{27\,a}\over{8}}\]

    \[=\frac{d-3\,c+3\,b-a}{h^3}\,y^3 +\frac{-3\,d+15\,c-21\,b+9\,a}{2\,h^2}\,y^2\]

    \[+\frac{3\,d-9\,c+33\,b-27\,a}{4\,h}\,y +\frac{-d +5\,c + 17\,b + 27\,a}{8}\]

This will be used to actually calculate g in programs.
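
As a first, unoptimized sketch (in Perl, just to illustrate the principle), g can also be evaluated directly from the definitions (1)–(3), without using the expansion above:

#!/usr/bin/perl
# evaluate the spline g(x) = sum_i a_i * f_i(x) directly from (1)-(3)
use strict;
use warnings;

# the base function f from equation (2)
sub f {
    my ($x) = @_;
    return 0 if ($x <= -2 || $x > 2);
    return ($x + 2)**3 if ($x <= -1);
    return 1 + 3*($x+1) + 3*($x+1)**2 - 3*($x+1)**3 if ($x <= 0);
    return 1 + 3*(1-$x) + 3*(1-$x)**2 - 3*(1-$x)**3 if ($x <= 1);
    return (2 - $x)**3;
}

# $a is a reference to an array holding a_{-1} .. a_{n+1}, so $a->[$i+1] is a_i
sub g {
    my ($a, $x0, $h, $x) = @_;
    my $n   = scalar(@$a) - 3;
    my $sum = 0;
    for my $i (-1 .. $n + 1) {
        my $xi = $x0 + $i * $h;                       # grid point x_i
        $sum += $a->[$i + 1] * f(($x - $xi) / $h);    # equation (3)
    }
    return $sum;
}

For a given x at most four of the terms are non-zero, which is exactly what the per-interval cubic above exploits.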

Disclaimer: I am not an expert in numerical analysis. While I believe that the approach that this article comes up with is sound and useful, I do believe that an expert of numerical analysis could still improve the accuracy of the calculations.

How to actually program it will be covered in the upcoming article „Spline Approximation (Cookbook)“. The link will be added, when it is available.

Git for Linux System Engineering

By now git has become the standard version control software, which is used by software developers.

Now for system engineering and system administration purposes the approach used to be to just log in, do something and remember it or maybe even note it down somewhere.

Some people tried to just use RCS or SCCS on the server to create a local repository for the versions of a configuration file. Some thought might be needed whether this is OK, because in the process of checking in a file, it might be temporarily unavailable or inconsistent, because RCS can modify files in the process. Usually this is not a problem, but if these kinds of problems occur, they are kind of weird to find and fix.

Today the approach has changed a lot. With virtualization it has become possible to build up thousands of servers. Or even millions, if you go to companies like Google. Typically a server performs one specific task and another server is used for another specific task, and since almost all are virtual, that is possible. Thus it becomes necessary to run and maintain thousands of servers and that means that things have to be done really efficiently.

An important approach is to automate a lot of things. This can be done for example using Ansible or Perl, Ruby or Python. Or combinations of these. Or there can be a cloud that supports a lot of features out of the box. And tools that automate certain tasks…

So at the end of the day, it is no longer the typical approach to log in to a server and do certain things, but to work on some kind of master server, prepare things and then run them on multiple servers from there. So the scripts and configuration files are created on this master server and now it becomes perfectly natural to use git to check them in.

Another engineer can clone the repository and perform the same task on a new server, for example.

Also it is a good idea to create branches and work with a branch on a test server until it looks good and then merge it into the main branch, for example roughly like in the sketch below. Even tags and release numbers make sense in some environments.
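
A rough sketch of such a workflow (branch, file and tag names are made up for the example):

git checkout -b sshd-hardening
vi files/sshd_config                   # do the change and try it out on the test server
git commit -a -m "disable password authentication"
git checkout main
git merge sshd-hardening
git tag rollout-2021-07
git push --tags origin main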

Only one tiny difference: while software developers tend to use IDEs like IntelliJ IDEA, Eclipse, Visual Studio or editors like Atom or Emacs, system engineers tend to use vi. This has several advantages: first of all, it is installed on every Linux server (unless you create one and uninstall it to disprove this, of course), it runs well across an ssh session even with relatively slow internet connections and it is powerful once you learn how it works. So I recommend learning just the basics to get simple stuff done. And I recommend avoiding heavy customizing of vi, because that might cause problems when working on a server without the customization.

And even though today work is done mostly on these master servers, of course it does still happen that it becomes necessary to log in to a specific server to check if the stuff worked as desired or to do tiny tasks that have not been automated. But remember: laziness is good. You should hate doing repetitive work on multiple servers and figure out how to automate it, if it happens for the third time or if it can be anticipated that it will happen more often. Automation not only creates efficiency, it also allows for better collaboration (via git), better consistency and better stability. It is the way to go.
