Temporary features – Good practices

I work for a company that performs administrative tasks for other companies. With covid-19, there are many temporary laws to take into account.
My company did what it has always done: simply create the new field in the table and adapt the programs. I know this is bad practice because, over time, we end up with tables of 100+ columns, most of them legacy. Many programs also contain hastily written code, which makes them difficult to maintain.

Considering these features require new columns in the database tables, what would be the best practice for developing them and then retiring them when we don’t need them anymore?

Concerning the database, I was considering creating child tables that inherit from the tables I need.
Concerning the programs, I would create a new git branch in which to implement the feature. If I then need new features/patches from the master branch, I would merge them in from a copy of the master branch.
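For the database side, a related option to child-table inheritance is a 1:1 “extension table” that holds only the temporary columns. A minimal sketch using SQLite (all table and column names here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    -- Temporary covid-19 fields live in their own table, keyed 1:1 to
    -- employee, instead of widening the core table.
    CREATE TABLE employee_covid19 (
        employee_id          INTEGER PRIMARY KEY REFERENCES employee(id),
        temporary_leave_days INTEGER
    );
""")
conn.execute("INSERT INTO employee (id, name) VALUES (1, 'Alice')")
conn.execute("INSERT INTO employee_covid19 VALUES (1, 14)")

# Programs that need the temporary fields join explicitly; everything
# else keeps reading the untouched core table.
row = conn.execute("""
    SELECT e.name, c.temporary_leave_days
    FROM employee e
    LEFT JOIN employee_covid19 c ON c.employee_id = e.id
""").fetchone()
print(row)  # ('Alice', 14)
```

Retiring the feature then amounts to dropping `employee_covid19` and deleting the code that joins it, leaving the core table untouched.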

Is that a good idea or are there any better practices that have proven themselves ?

Go to Source
Author: Degravef

Correct way to implement a heatmap in the browser?

I have a list of AQI data for multiple stations in my country, and I want to create a heatmap for the entire country from that data. The heatmap would be a gradient from green (low AQI) to red (high AQI). Any ideas on how to implement this? I am using react.js and was thinking of using mapbox-gl, but I can’t find anything for this in the documentation.

A rough idea of what I am trying to implement:
[heatmap mock-up image]
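One common approach is to interpolate the station measurements onto a grid and color each cell; mapbox-gl’s heatmap layer visualizes point density, so AQI interpolation usually has to happen in your own code first. A rough, framework-free sketch of that step using inverse-distance weighting (the station data below is made up):

```python
stations = [  # (lon, lat, aqi) -- hypothetical sample stations
    (77.2, 28.6, 180),
    (72.9, 19.1, 90),
    (88.4, 22.6, 140),
]

def idw_aqi(lon, lat, power=2):
    """Interpolate AQI at (lon, lat) by inverse-distance weighting."""
    num = den = 0.0
    for slon, slat, aqi in stations:
        d2 = (lon - slon) ** 2 + (lat - slat) ** 2
        if d2 == 0:
            return aqi  # exactly on a station
        w = 1.0 / d2 ** (power / 2)
        num += w * aqi
        den += w
    return num / den

def aqi_to_rgb(aqi, lo=0, hi=300):
    """Linear green -> red gradient over the given AQI range."""
    t = min(max((aqi - lo) / (hi - lo), 0.0), 1.0)
    return (int(255 * t), int(255 * (1 - t)), 0)

# Evaluate the grid cell covering some point of the country:
print(aqi_to_rgb(idw_aqi(80.0, 25.0)))
```

The resulting colored grid can then be rendered as a raster overlay or fill layer on the map.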

Author: Dani

What are the pros and cons of using a reference/pointer vs an ID

I’m writing in C++, but this problem applies to any language without GC, and even to languages with one.

I have a structure in memory in which I create/add objects. The structure takes ownership of those objects. I should never need to use an object after it’s removed from the structure.

When I first implemented this data structure, it seemed natural to use an ID/key/name/handle for the objects stored in it. I’m using it like this:

id1 = structure.addObj(new Square());
id2 = structure.addObj(new Square());
id3 = structure.addObj(new Circle());

obj3 = structure.getObj(id3);
obj3.addFriend( id1 );
obj3.addFriend( id2 );

idMax = structure.findObjWithMostFriends();
objMax = structure.getObj(idMax);
print(objMax.name);

After using it for a while, I’m thinking that it would be better to forget about the IDs and always use references to the objects instead. This way I wouldn’t need to pass a reference to the structure around every time.

I’m thinking about refactoring everything to only use references, but I’m afraid of regretting it. I’d like to know more about the pros and cons of using IDs before I decide whether to proceed.

Memory details:

The objects are allocated on the heap and their address never changes.

The structure deallocates those objects when they’re removed (they could be released to the caller instead, but I don’t need this at the moment).

I’m not supposed to ever use objects that don’t belong to the structure. If my program is correct, I should never end up with a dangling ID or pointer. But it could happen if the program has bugs.
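One concrete pro of IDs worth weighing here: unlike a raw pointer, an ID can be made to fail loudly when it dangles. A sketch of a “generational” ID store, in Python for brevity (the class and method names are invented, not from the question):

```python
class Store:
    """Owns its objects; hands out (index, generation) ids."""

    def __init__(self):
        self.slots = []  # each slot is [generation, obj-or-None]

    def add(self, obj):
        self.slots.append([0, obj])
        return (len(self.slots) - 1, 0)

    def get(self, id_):
        index, gen = id_
        slot = self.slots[index]
        if slot[0] != gen or slot[1] is None:
            raise KeyError("stale id")  # caught error, not undefined behavior
        return slot[1]

    def remove(self, id_):
        obj = self.get(id_)
        self.slots[id_[0]][0] += 1  # bump generation: outstanding ids go stale
        self.slots[id_[0]][1] = None
        return obj

store = Store()
id1 = store.add("square")
store.remove(id1)
# store.get(id1) now raises KeyError instead of touching freed memory
```

A raw pointer to the removed object would be undefined behavior in C++; the stale ID is an ordinary, detectable error, which matters precisely in the "if the program has bugs" case.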

What are your experiences switching from IDs to references for similar problems? Which solution should I use?

Author: Helloer

Monolithic web application design

Assume I have some sort of enterprise accounting software system (a web application).

I have many individual companies/customers that use this system.

Each company uses the system to perform financial transactions, e.g. keeping track of whom they make payments to, who their creditors and debtors are, and so on.

Design #1:

I, as the owner of this software system, am the ‘super admin’ of this entire web application.

The web application is designed in such a way that I, as the super admin, can use the user interface on the web app to adjust the settings and configuration for each of my customers using the system.

An example of the page I see when I login as the super admin is below:

[screenshot: the page the super admin sees, listing each company’s configuration]

I as the super admin can go in and adjust the configuration for each company.

I as the super admin can also access another kind of control panel, in which I adjust settings that apply to multiple or all of my customers at once.

The system is designed in such a way that each company can have their own ‘administrator’ who can modify the settings that affect only their own company and their own users.

The database design is generally a ‘monolithic structure’: one database with more than 50 tables, which of course caters to the roles and privileges of the different users across the entire web application.

I can appreciate the control the super admin has over all the companies. There is some level of convenience, as he/she can manipulate the entire web app based on whatever the customers tell him/her.

Design #2

My question is: what happens if I design a similar kind of accounting software system, but instead create a separate database for each company, with no super-admin user interface for me to manipulate my customers’ systems?

If I have 10 customers, I have 10 databases, and each database will have a less complicated design.

Then I use version control to manage the source code and database design for my 10 unique customers.
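The routing in Design #2 can be sketched as a lookup from customer to their own database. A hypothetical sketch, where `":memory:"` stands in for a real per-tenant database file or server:

```python
import sqlite3

_connections = {}

def connection_for(company_id):
    # One database per customer; the schema inside each stays simple
    # because no tenant-discrimination columns or roles are needed.
    if company_id not in _connections:
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE payment (id INTEGER PRIMARY KEY, amount REAL)")
        _connections[company_id] = conn
    return _connections[company_id]

connection_for("acme").execute("INSERT INTO payment (amount) VALUES (9.99)")
# Tenant isolation falls out for free: company "beta" cannot see "acme" data.
rows = connection_for("beta").execute("SELECT * FROM payment").fetchall()
print(rows)  # []
```

In Design #1, by contrast, every query carries a company/role filter and the isolation lives in application code.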


I am curious what the general pros and cons of these two designs are.

Is there a difference in how the web application can be ‘distributed’ across multiple application servers/database servers for these two designs?

I ask this question because I have only a limited understanding of how a software system can be ‘divided across multiple servers’ to keep its performance good.

Thank you.

Author: jin cheng teo

API / System Design [Flexible Authentication / Authorization]?

Background:

We’re a smaller shop that puts out a number of products that require
authentication and authorization. We’re currently using a 3rd party
service to “spin up new auth APIs” for each application. However, I
would like to investigate potential designs for doing this ourselves
in-house due to additional security requirements that may be needed in
the future. I’ve included information below and would appreciate any
design input or support, as I’m relatively new to designs of this scale.

Summary of goals:

An interface that allows a developer to create and manage
authentication APIs for multiple applications via a webpage. This
includes the mostly automated process of spinning up new auth APIs,
and ideally the ability to do some form of RBAC / ABAC changes via
this page.

Acceptable ‘Limitations’:

  1. All APIs can expose the same common-auth endpoints, so they’re basically just “Images” of one another. (I.e. Login / Register / etc.)

  2. While this would ideally be entirely automated, some parts I’ve thought of being manual are:
    * Domain configuration (pointing subdomains to new endpoints)
    * Spinning up additional VMs (needed?)
    * Spinning up additional DBs or tables?
    * Minor configuration changes
    * others I haven’t thought of?

User Stories:

  • As a Developer, I want to login to a web portal so that I can manage auth APIs.
  • As a Developer, I want to create a new API in the web UI, so that I can then integrate it to new applications.
  • As a Developer, I want to manage users in the web UI, so I can oversee access to our applications.
  • As a Developer, I want to **…

Future Considerations:

Each new API for the applications should likely have the user stores
(table containing user information) segregated into different
databases stored on separate hardware to minimize attack vectors and
improve security/scaling. For now, I’m thinking of different
subdomains or maybe request parameters to separate the APIs?
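The subdomain idea above could be sketched as a simple host-to-user-store lookup (all tenant names and connection strings below are invented for illustration):

```python
# Hypothetical mapping from application subdomain to its segregated
# user store on separate hardware.
TENANT_DB = {
    "app1": "postgres://db-host-1/app1_users",
    "app2": "postgres://db-host-2/app2_users",
}

def user_store_for(host):
    """Resolve 'app1.auth.example.com' -> the app1 user store."""
    subdomain = host.split(".")[0]
    try:
        return TENANT_DB[subdomain]
    except KeyError:
        raise ValueError(f"unknown tenant: {subdomain}")

print(user_store_for("app1.auth.example.com"))
```

Subdomains keep the isolation visible at the network level, whereas request parameters would funnel every application through one endpoint and make per-tenant firewalling or scaling harder.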

Thoughts:

I feel like there may be some solution that involves building a
template/image of an Auth API on Azure and just duplicating the VM or
image, but I’m not too sure of this route either. Obviously
management, maintenance, updates, etc. to these would be more
hands-on, but feel free to provide feedback on this as well.

Thanks in advance!

Author: ClicheCoffeeMug

DDD, service and localization best practices?

I’m building an HTTP API with DDD principles. The end goal is that the server runs in some chosen language X, but the clients can use whatever language the software supports. The service has state, but HTTP is stateless.

First without language:

storage = new MyStorage(params) // Can be anything from FS based to SQL, etc
service = new TODO(storage)
server = new build(service)
server.run()

build class is something like:

class build
  server: HTTPBase

  func build(service) 
    server.POST('/add', 
      func(req, resp)
        what = req.Get("param")
        res = service.DoSomething(what)
        if res == "Y"
          resp.Write("Everything went fine")
          return
        resp.Write("Oh noes!")
    )

    return server

Let’s add a language (the wrong way with global state):

defaultLanguage = "eng"
storage = new MyStorage(defaultLanguage, params) 
service = new TODO(defaultLanguage, storage)
server = new build(service) // Oh noes!
server.run()

class build
  service
  server: HTTPBase

  func getLanguageFromURL(req, resp)
    self.service.SetLanguage(req.GetFromURL('$language'))

  func build(service)
    self.service = service // Oh noes!

    server.BaseRoute('/$language')
    server.Middleware(self.getLanguageFromURL)

    server.POST('/add', // This is now /$language/add
      func(req, resp)
        what = req.Get("param")
        // Because we're setting service's language in the middleware 
        // this becomes a mess because some other user using some other 
        // language might just done something and results to client then 
        // getting the result in wrong language.
        // (= This changes service's global state, and explodes)
        res = self.service.DoSomething(what)
        if res == "Y"
          resp.Write(translate("Everything went fine"))
          return
        resp.Write(translate("Oh noes!"))
    )

    return server

So where to go?

Always spawn new service in middleware?

defaultLanguage = "eng"
server = new build(defaultLanguage)
server.run()

class build
  service
  defaultLanguage
  server: HTTPBase

  func serviceMW(req, resp)
    storage = new MyStorage(self.defaultLanguage, params) 
    service = new TODO(self.defaultLanguage, storage)
    service.setLanguage(req.GetFromURL('$language'))
    req.Context['service'] = service


  func build(defaultLanguage)
    self.defaultLanguage = defaultLanguage
    server.BaseRoute('/$language')
    server.Middleware(serviceMW)

    server.POST('/add', // This is now /$language/add
      func(req, resp)
        // Now we have one time only service which is destroyed after request is complete
        service = req.Context.Get('service')
        what = req.Get("param")
        
        res = service.DoSomething(what)
        if res.X == "Y"
          resp.Write(translate("Everything went fine"))
          return
        resp.Write(translate("Oh noes!"))
    )

    return server

This is nicely isolated, but starting the service might be really slow and eat a lot of resources, depending on what it does.

Clone/copy the service in middleware?

defaultLanguage = "eng"
storage = new MyStorage(defaultLanguage, params) 
service = new TODO(defaultLanguage, storage)
server = new build(service)
server.run()

class build
  service // main instance which gets cloned for each request
  server: HTTPBase

  func serviceMW(req, resp)
    // Create new instance
    service = self.service.clone() 
    service.setLanguage(req.GetFromURL('$language'))
    req.Context['service'] = service

  func build(service)
    // Create new instance
    self.service = service.clone()

    server.BaseRoute('/$language')
    server.Middleware(serviceMW)

    server.POST('/add', // This is now /$language/add
      func(req, resp)
        what = req.Get("param")

        // Cloned in middleware and destroyed after this request
        service = req.Context.Get('service')

        res = service.DoSomething(what)
        if res == "Y"
          resp.Write(translate("Everything went fine"))
          return
        resp.Write(translate("Oh noes!"))
    )

    return server

This adds isolation and avoids the possible startup slowdowns and resource spikes, as that work is done beforehand.

Run a pool (array) of services for each language?

class build
  service[language] // service["eng"], service["fin"], ..

There are a lot of languages, so this increases resource consumption. There could also be a mechanism that spawns the service for a specific language when it’s requested for the first time; services that have been idle for N hours/minutes could then be removed.

Add language parameter to service’s and repository’s functions?

All Service.DoSomething(param1) calls become Service.DoSomething(language, param1).

Services/repositories never return plain strings, but some sort of translatable objects?

So you have something like:

result = service.DoSomething(param1)
actualResult = result.Translate(myChosenLanguage)
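That last option could look like the following sketch, where the service returns a message key plus arguments and translation happens only at the HTTP edge (the catalog contents and all names are invented):

```python
# Hypothetical message catalog; in practice this would come from
# gettext files, resource bundles, etc.
CATALOG = {
    "eng": {"add.ok": "Everything went fine", "add.fail": "Oh noes!"},
    "fin": {"add.ok": "Kaikki meni hyvin", "add.fail": "Voi ei!"},
}

class Translatable:
    """A language-neutral result: a message key plus format arguments."""

    def __init__(self, key, **args):
        self.key, self.args = key, args

    def translate(self, language):
        return CATALOG[language][self.key].format(**self.args)

def do_something(what):
    # The domain service stays free of any language state.
    return Translatable("add.ok") if what == "Y" else Translatable("add.fail")

print(do_something("Y").translate("fin"))  # Kaikki meni hyvin
```

This keeps the service a singleton with no per-request language state, sidestepping the shared-mutable-state problem entirely: language becomes a presentation concern, not a domain concern.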

What is the DDD way of handling localization? Am I missing any options?

Author: raspi

How does poker analysis software read the cards from the poker room client? [closed]

I would like to write some poker software, similar to PokerTracker and Holdem Manager, that gives real-time stats during a game of poker. These programs somehow read the current cards being played and the player names from the poker client software. I assumed this was done by reading the poker client’s log files, but when I tried, it seems the log files are not updated in real time, only after each game has completed.

How do they do it?

Author: user2096512

How is design using C different from C++?

An employer is looking for C programmers, and I’m told they say that …

Good C design isn’t the same as good C++ design

… and so they’re looking for candidates experienced with C and not only C++.

How is the design of a large C system (hundreds of thousands or millions of lines of code) very different from that of C++?

Are the skills required of a developer very different, what differences should an experienced developer expect?

I’ve read Why are most Linux programs written in C? — including Linus’ little “two minute hate” at http://harmful.cat-v.org/software/c++/linus — but that doesn’t answer my question, which might be, “How is a well-designed C system unlike well-designed C++?” Or are they similar, and is Linus’ argument all there is to it?

I read Lakos’ Large-scale C++ Software Design — is there anything at all like that for C?


I’m trying to write this such that it isn’t a duplicate of:

Please assume I already know the differences between the languages.

I used C in the early 90s (before C++ became popular on PCs) and for writing device drivers on Windows (in the kernel, where the C++ run-time library wasn’t supported), and I learned C++ incrementally as a superset of C.

IMO there’s an obvious mapping between C and C++, i.e. what’s written in one can be written in the other, for example:

  • C — a “file pointer” or “file handle”, plus an API of related functions which take a handle-or-pointer as a parameter, plus an underlying data structure (possibly hidden/encapsulated) which contains state associated with each handle
  • C++ — ditto, except that the “data structure” and “associated functions” are encapsulated in a class, as data members and methods

C++ has additional syntactic and type-checking sugar (e.g. templates and operator overloading), and its destructors allow RAII and reference-counting smart pointers, but apart from that …

And C has no first-class/language support for polymorphism, but e.g. a device driver on Windows is an installable plug-in which exports entry points, more or less like a vtable.

Author: ChrisW

Should child nodes have a reference to their parents?

Let’s say you are engineering a chat room software.

let client = new Client();
let room = rooms.FindRoom();
room.addClient(client);

This room (parent) now has a client (child).

client.on('message', (event) => {
   // With the above code, room must be found
   let room = rooms.FindClientsRoom(client);
   if(room){
       room.handleMessage(event);
   }
});

Or we have a child that knows about its parent

let client = new Client();
let room = rooms.FindRoom();
room.addClient(client);
client.setRoom(room);


client.on('message', (event) => {
   let room = client.getRoom();
   if(room){
       room.handleMessage(event);
   }
});

This is incredibly fast compared to searching for a client across thousands of rooms. But is there something wrong with this design pattern? In other systems, such as XML, do child nodes know about their parents? Should they?
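For reference, DOM nodes do expose a parent link (parentNode), so the pattern itself is well established. One refinement worth knowing: storing the parent as a weak reference keeps the child from prolonging a dropped room’s lifetime. A sketch in Python, where reference cycles actually matter for refcounting (JavaScript’s tracing GC handles cycles on its own):

```python
import weakref

class Room:
    def __init__(self):
        self.clients = []
        self.messages = []

    def add_client(self, client):
        self.clients.append(client)
        client._room = weakref.ref(self)  # child -> parent back-reference

    def handle_message(self, event):
        self.messages.append(event)

class Client:
    _room = None

    def room(self):
        # Returns None once the room has been garbage-collected.
        return self._room() if self._room else None

    def on_message(self, event):
        room = self.room()  # O(1), no search through all rooms
        if room:
            room.handle_message(event)

room, client = Room(), Client()
room.add_client(client)
client.on_message("hello")
print(room.messages)  # ['hello']
```

The main cost of the back-reference is keeping it consistent: every `addClient`/`removeClient` must update both sides, so it belongs in one place (the parent’s methods), as in the sketch.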

Author: de-playable

Is Tailwind the better option?

Is Tailwind a good option for styling instead of plain CSS, and is it acceptable in the software development industry? I recently discovered Tailwind while coding in ReactJS. The way you write styles with Tailwind is very different and more flexible; however, I am not sure whether it is applicable in the software industry.

Author: Munira Tashrifwala

Architecture of optimized sync of JSON data to a (cloud) server

I’m building an application that generates JSON data based on the user’s notes. The notes are rich data that contain task lists and other types of elements inside it, so I use JSON format to store it. It currently saves this data to the browser’s local storage using JSON.stringify.

Now I’m building a feature that saves this data to a cloud server, but since the notes constantly change by nature, sending a big stringified object to the server on every note update (even with a 5-second debounce) seems a bit crazy to me. Depending on the size of the note, I’d have calls of 300 kB or more to the server for a simple typo correction. Not good.

So I’ve looked into some services that support rich data JSON storage, such as Google’s Cloud Firestore or Amazon’s DynamoDB. Both look promising, but I’m still trying to wrap my head around my architecture for this. Here’s my plan so far:

  1. I’ll read the JSON from the server and keep a local object.
  2. At every local change, I’ll analyze the difference down the object tree and make some sort of diff string
  3. I’ll send only this diff string so the server can analyze and update its record.

Well, that seems to work. I would send a few bytes requests to the server, and not 300kb. It would require some code to analyze and generate the diff, but ok so far.
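Step 2 can be hand-rolled with a small recursive comparison; real projects might instead emit RFC 6902 “JSON Patch” operations via a library. A minimal sketch (the note structure below is invented):

```python
def diff(old, new, path=""):
    """Return {json_path: new_value} for every change; None marks a deletion."""
    changes = {}
    if isinstance(old, dict) and isinstance(new, dict):
        for key in old.keys() | new.keys():
            p = f"{path}/{key}"
            if key not in new:
                changes[p] = None              # deleted
            elif key not in old:
                changes[p] = new[key]          # added
            else:
                changes.update(diff(old[key], new[key], p))
    elif old != new:
        changes[path] = new                    # replaced (incl. lists)
    return changes

old = {"title": "Note", "tasks": {"1": "buy milk"}}
new = {"title": "Note!", "tasks": {"1": "buy milk", "2": "call mom"}}
# Only the changed paths travel over the wire: a few bytes, not the
# whole stringified note.
print(diff(old, new))
```

The server then applies each path as a field-level update, which maps naturally onto Firestore’s dotted-field-path updates or DynamoDB update expressions.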

Now the server part is where my challenge is. Both Cloud Firestore and DynamoDB have rich data-change operations, but it seems that for both, these operations must be pre-established according to the data itself; in other words, you must know the dataset in advance in order to operate on it. Since my notes’ JSON is a dynamic dataset, I have no way of writing custom updates and inserts on it; they must be dynamically calculated via some sort of diff mechanism.

My questions are:

  1. Is this a good way to approach this problem or is there something I could be doing that would make things better/easier?
  2. Are there any cloud services, NoSQL databases, query languages, or libraries that would allow for this type of optimized dynamic JSON document update architecture?
  3. Would this be a good use case for GraphQL?

Author: raphadko

Leveraging the Process Context In Modular Monolith

I’m currently researching whether my current monolith architecture
can be improved by moving to micro-services.

From what I saw, there are a lot of good things about micro-services,
but there are also a lot of caveats and pitfalls that might complicate
the whole system (transactions, data querying, data sync and duplication, fault tolerance, error handling, etc.).

There seems to be a middle ground in going with a “Modular Monolith” (1,2,3), where the system is designed vertically into loosely coupled modules that interact using APIs.

The question is how the fact that these modules operate against the same database and sit in the same process space can be leveraged to ease this complexity, i.e.:

  1. Can the modules declare a “transaction context” in their API that allows an ACID transaction when a business-logic operation spans modules? (In contrast to micro-services, where this is not achievable by design.)
  2. Can special modules declare database views that allow joining data at the database level rather than the application level?
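Point 1 can be sketched concretely: because the modules share one process and one database, a cross-module operation can pass a single transaction handle around (module and function names below are invented; SQLite stands in for the shared database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT);
    CREATE TABLE stock  (item TEXT PRIMARY KEY, qty INTEGER);
""")
conn.execute("INSERT INTO stock VALUES ('widget', 1)")
conn.commit()

def orders_module_place(tx, item):   # module A's API takes the context
    tx.execute("INSERT INTO orders (item) VALUES (?)", (item,))

def stock_module_reserve(tx, item):  # module B's API takes the same context
    tx.execute("UPDATE stock SET qty = qty - 1 WHERE item = ?", (item,))

try:
    with conn:  # one ACID transaction spanning both modules
        orders_module_place(conn, "widget")
        stock_module_reserve(conn, "widget")
        raise RuntimeError("simulated failure")  # triggers rollback
except RuntimeError:
    pass

# Both modules' writes rolled back atomically -- the part that
# micro-services can only approximate with sagas/compensation.
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 0
```

Across network boundaries this handle cannot be passed, which is exactly why the same operation in micro-services turns into sagas and compensating actions.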

If these cannot be leveraged, what is the real difference between a modular monolith and micro-services (besides the independent scaling and deployment)?

Author: sborpo

Which key-value database should I use and why?

I have an application that scrapes webpages. Once a webpage is scraped, it has a unique identifier, which is a string, and associated content, which is also a string. I need to persist the webpage content so that the next time I scrape the same webpage I can tell whether it has changed.

So I might store either the page identifier mapped to its checksum, or simply to its content. Either way, the basic storage need is simple: a string-to-string map.

I think this is a use case for a key-value database. Since my needs are very simple, I don’t need a database full of fancy features. What would be a good open-source key-value database that suits my simple needs?

Also, is it even necessary to use a database? I could fall back to a file storing the key-value pairs. Will I lose performance if I use a database over a file?
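For the checksum variant, Python’s standard library alone already gives a file-backed string-to-string map via `dbm`, with no database server needed. A sketch (the function and path handling are invented for illustration):

```python
import dbm
import hashlib
import os
import tempfile

# Hypothetical storage location for the checksum map.
path = os.path.join(tempfile.mkdtemp(), "pages")

def changed_since_last_scrape(url, content):
    """Store a checksum per URL; report whether the content changed."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    with dbm.open(path, "c") as db:         # "c": create if missing
        previous = db[url] if url in db else None
        db[url] = digest                     # remember for next scrape
        return previous is None or previous.decode() != digest

print(changed_since_last_scrape("http://example.com", "<html>v1</html>"))  # True
print(changed_since_last_scrape("http://example.com", "<html>v1</html>"))  # False
```

Storing the checksum rather than the content also keeps the file small; a dedicated key-value database mainly starts to pay off with concurrent writers or datasets that no longer fit this simple pattern.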

Author: Navjot Singh