How to create an Event-Driven Architecture with many external sources

I work as a Product Owner at an accounting company that uses several external services (one for contract management, another for accounting demands management, a third for financial purposes, and so on).

My IT Director wants us to build an event-driven architecture that gathers all the events that happen externally and brings them into our NoSQL database in as close to real time as possible.

The specific problem is: how can I gather this data from the external sources? What would be the best procedure for hundreds of events per day from different services? Polling their APIs periodically?

Can you guys shine a light on how this can be properly done?
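
For what it's worth, at hundreds of events per day, the periodic-polling option mentioned above is simple and usually sufficient. Below is a minimal sketch using only the JDK's HttpClient; the endpoint URL, the cursor parameter, and the publishToPipeline hook are invented placeholders, not any vendor's actual API:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Polls one external service for new events and hands them to our pipeline.
// A real integration would prefer the provider's webhooks or event stream
// when available, and fall back to polling like this otherwise.
public class EventPoller {
    private final HttpClient http = HttpClient.newHttpClient();
    private String cursor = "0"; // last event id seen (assumed API shape)

    public void start() {
        Executors.newSingleThreadScheduledExecutor()
                .scheduleAtFixedRate(this::pollOnce, 0, 30, TimeUnit.SECONDS);
    }

    private void pollOnce() {
        try {
            // Hypothetical endpoint: "give me everything after my last cursor".
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://contracts.example.com/api/events?after=" + cursor))
                    .timeout(Duration.ofSeconds(10))
                    .header("Accept", "application/json")
                    .build();
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() == 200) {
                publishToPipeline(response.body()); // e.g. onto a broker, then into NoSQL
                // cursor = ...;  // advance the cursor from the response payload
            }
        } catch (Exception e) {
            System.err.println("Poll failed, will retry next tick: " + e.getMessage());
        }
    }

    private void publishToPipeline(String eventsJson) {
        System.out.println("Received events: " + eventsJson); // placeholder
    }
}
```

If a provider offers webhooks or an event stream, that pushes latency down further; polling is the fallback that always works.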

Go to Source
Author: Otto Neres

Integrating TeX into a Java desktop application

I'm looking to integrate TeX equations, in a TeX-agnostic fashion suitable for either ConTeXt or LaTeX, into a Java-based desktop Markdown editor. The possibilities are numerous, but I'm not sure which approach to take.

JMathTeX outputs MathML, which must be transformed. JEuclid can transform MathML to BufferedImages (not 100% sure). Neither is JDK 14-friendly, and both may be too slow to render in real time. I haven't looked closely because I read that they don't work with Java 9 (without porting effort), much less 14. Also, the licensing isn't LGPL, which probably won't work with the project's MIT licensing.

The NTS library is reported to compile gentle.tex in about 3 seconds. The spin-off project, εχTEX, improves upon NTS, but I don't know to what extent.

Java LaTeX Report requires a TeX engine.

Yet another possibility is to try JavaTex or rework the project using web2java to transpile WEB files into Java files.

JLaTeXMath does an exceptional job at rendering, though investigation would be required to see whether it can render in real time, output SVG, or otherwise integrate with FlyingSaucer.
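
For what it's worth, the basic JLaTeXMath render-to-image path is short (assuming the usual org.scilab.forge.jlatexmath API); whether it is fast enough for per-keystroke rendering would still need measuring:

```java
import org.scilab.forge.jlatexmath.TeXConstants;
import org.scilab.forge.jlatexmath.TeXFormula;
import org.scilab.forge.jlatexmath.TeXIcon;

import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Renders a TeX formula to a BufferedImage with JLaTeXMath.
public class TexRenderDemo {
    public static BufferedImage render(String tex, float pointSize) {
        TeXFormula formula = new TeXFormula(tex);
        TeXIcon icon = formula.createTeXIcon(TeXConstants.STYLE_DISPLAY, pointSize);
        BufferedImage image = new BufferedImage(
                icon.getIconWidth(), icon.getIconHeight(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = image.createGraphics();
        icon.setForeground(Color.BLACK);
        icon.paintIcon(null, g, 0, 0);
        g.dispose();
        return image;
    }

    public static void main(String[] args) {
        BufferedImage img = render("\\int_0^\\infty e^{-x^2}\\,dx = \\frac{\\sqrt{\\pi}}{2}", 20f);
        System.out.println("Rendered " + img.getWidth() + "x" + img.getHeight() + " image");
    }
}
```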

The HTML preview panel uses FlyingSaucer. The SVG documents are rendered using SVG Salamander. If it were possible to go from TeX to SVG, that could work quite handily, architecturally. Some documentation stated FlyingSaucer can also render MathML, but I suspect the doc is wrong. FlyingSaucer doesn't integrate with JavaScript; for that I'd have to add a JavaScript engine.

If a JavaScript Engine is necessary, then using KaTeX may be an option.

Still another possibility is the TeX to MathML service. This is written in Java, but the source code is not available anywhere. I’ve reached out to some people involved in the project.

For native speeds, there may be a (cross-platform?) C-based TeX engine that can generate MathML or SVG from TeX input. If so, it may be possible to integrate it via JNI.

Here’s a demo video of the app, to give you an idea of what is needed.

The goal is to produce real-time rendering of math, ideally in pure Java (no JavaScript).

What approach would you take and what do you see as its benefits and drawbacks?

Go to Source
Author: Dave Jarvis

How do no-code platforms such as Wix, Squarespace and Webflow work?

From an architectural point of view, how do platforms such as Wix, Squarespace and Webflow work when it comes to translating “drag and drop” templates into a fully functioning HTML website?

  • Are designs stored in raw HTML or in a native JSON format in the backend?
  • How do they get transpiled in real-time so quickly when presented to the user?

My initial thoughts are as follows:

  1. The user builds a web page that is transformed into a descriptive JSON file.
  2. The JSON file is uploaded to the NodeJS server, which builds the HTML, CSS and JS.
  3. The HTML, CSS and built (minified/concatenated) JS are stored in the database.
  4. When the user visits the URL of the site in question, the site is retrieved from the database and transformed into something viewable by the browser (see the sketch below).
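
To make step 4 concrete, here is a minimal sketch of rendering a stored JSON-like description into HTML. The node shape (type/text/children) is invented for illustration; real platforms have far richer component models:

```java
import java.util.List;

// A toy page-description tree: each node is a component the user dragged in.
record Node(String type, String text, List<Node> children) {}

public class PageRenderer {
    // Recursively turn the stored description into HTML markup.
    static String render(Node node) {
        String tag = switch (node.type()) {
            case "heading" -> "h1";
            case "paragraph" -> "p";
            case "section" -> "div";
            default -> "span";
        };
        StringBuilder html = new StringBuilder("<" + tag + ">");
        if (node.text() != null) html.append(node.text());
        for (Node child : node.children()) html.append(render(child));
        return html.append("</").append(tag).append(">").toString();
    }

    public static void main(String[] args) {
        Node page = new Node("section", null, List.of(
                new Node("heading", "My Site", List.of()),
                new Node("paragraph", "Welcome!", List.of())));
        System.out.println(render(page));
        // => <div><h1>My Site</h1><p>Welcome!</p></div>
    }
}
```

Whether the HTML is generated once at publish time (your step 3) or on each request from the stored JSON is essentially a caching decision; pre-building a static snapshot at publish would explain how the published sites load so quickly.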

Go to Source
Author: methuselah

How is design using C different from C++?

An employer is looking for C programmers, and I’m told they say that …

Good C design isn’t the same as good C++ design

… and so they’re looking for candidates experienced with C and not only C++.

How is the design of a large C system (hundreds of thousands or millions of lines of code) very different from that of C++?

Are the skills required of a developer very different? What differences should an experienced developer expect?

I’ve read Why are most Linux programs written in C? — including Linus’ little “two minute hate” at http://harmful.cat-v.org/software/c++/linus — but that doesn’t answer my question, which might be, “How is a well-designed C system unlike well-designed C++?” Or are they similar, and is Linus’ argument all there is to it?

I read Lakos’ Large-scale C++ Software Design — is there anything at all like that for C?


I’m trying to write this such that it isn’t a duplicate of:

Please assume I already know the differences between the languages.

I used C in the early 90s (before C++ became popular on PCs), and for writing device drivers on Windows (in the kernel where the C++ run-time library wasn’t supported), and I learned C++ incrementally as a superset of C.

IMO there’s an obvious mapping between C and C++, in that what’s written in one can be written in the other. For example:

  • C — a “file pointer” or “file handle”, plus an API of related functions which take a handle-or-pointer as a parameter, plus an underlying data structure (possibly hidden/encapsulated) which contains state associated with each handle
  • C++ — ditto, except that the “data structure” and “associated functions” are encapsulated in a class, as data members and methods

C++ has additional syntactic and type-checking sugar (e.g. templates and operator overloading), and its destructors allow RAII and reference-counting smart pointers, but apart from that …

And C has no first-class/language support for polymorphism, but e.g. a device driver on Windows is an installable plug-in which exports entry points, more or less like a vtable.

Go to Source
Author: ChrisW

Architecture of optimized sync of JSON data to a (cloud) server

I’m building an application that generates JSON data based on the user’s notes. The notes are rich data that contain task lists and other types of elements, so I use the JSON format to store them. The app currently saves this data to the browser’s local storage using JSON.stringify.

Now I’m building a feature that saves this data to a cloud server, but since the notes by nature change constantly, sending a big stringified object to the server on every note update (even with a 5-second debounce) seems a bit crazy to me. Depending on the size of the note, I’d be making calls of 300 kB or more to the server for a simple typo correction. Not good.

So I’ve looked into some services that support rich data JSON storage, such as Google’s Cloud Firestore or Amazon’s DynamoDB. Both look promising, but I’m still trying to wrap my head around my architecture for this. Here’s my plan so far:

  1. I’ll read the JSON from the server and keep a local object.
  2. At every local change, I’ll walk the object tree, compute the difference, and produce some sort of diff string.
  3. I’ll send only this diff string, so the server can apply it and update its record.

Well, that seems workable. I would send requests of a few bytes to the server instead of 300 kB. It would require some code to compute and apply the diff, but OK so far.
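
For step 2, one concrete shape for that diff string is a set of JSON-Patch-style operations (RFC 6902). Here is a minimal sketch over plain nested maps; a real implementation would also handle arrays and use a proper JSON library rather than the naive value quoting shown here:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Emits RFC 6902-style operations describing how `after` differs from
// `before`. Handles nested maps only; arrays and type changes are omitted.
public class JsonDiff {
    @SuppressWarnings("unchecked")
    static List<String> diff(Map<String, Object> before, Map<String, Object> after, String path) {
        List<String> ops = new ArrayList<>();
        for (String key : before.keySet()) {
            if (!after.containsKey(key)) {
                ops.add("{\"op\":\"remove\",\"path\":\"" + path + "/" + key + "\"}");
            }
        }
        for (Map.Entry<String, Object> e : after.entrySet()) {
            String childPath = path + "/" + e.getKey();
            Object oldVal = before.get(e.getKey());
            Object newVal = e.getValue();
            if (!before.containsKey(e.getKey())) {
                ops.add("{\"op\":\"add\",\"path\":\"" + childPath + "\",\"value\":\"" + newVal + "\"}");
            } else if (oldVal instanceof Map && newVal instanceof Map) {
                ops.addAll(diff((Map<String, Object>) oldVal, (Map<String, Object>) newVal, childPath));
            } else if (!Objects.equals(oldVal, newVal)) {
                ops.add("{\"op\":\"replace\",\"path\":\"" + childPath + "\",\"value\":\"" + newVal + "\"}");
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        Map<String, Object> before = Map.of("title", "Groceries", "body", Map.of("text", "milk"));
        Map<String, Object> after = Map.of("title", "Groceries", "body", Map.of("text", "milk, eggs"));
        diff(before, after, "").forEach(System.out::println);
        // => {"op":"replace","path":"/body/text","value":"milk, eggs"}
    }
}
```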

Now the server part is where my challenge is. Both Cloud Firestore and DynamoDB have rich data-update operations, but it seems that for both, these operations must be written against a known structure; in other words, you must know the shape of the dataset in advance in order to operate on it. Since my notes’ JSON is a dynamic dataset, I have no way of writing custom updates and inserts for it; they must be calculated dynamically via some sort of diff mechanism.

My questions are:

  1. Is this a good way to approach this problem or is there something I could be doing that would make things better/easier?
  2. Are there any cloud services, NoSQL databases, query languages, or libraries that would allow for this type of optimized dynamic JSON document update architecture?
  3. Would this be a good use case for GraphQL?

Go to Source
Author: raphadko

Best architecture and methods for high-performance computing that needs to scale

I have to make a decision regarding architecture and methods for the rewrite of a proof-of-concept application I wrote 10 years ago in C++.

It’s about high-performance position calculation based on multi-trilateration.
Hundreds or thousands of IoT sensors send their JSON-based distance information to a host using MQTT. From there, the information needs to be processed.

My goal is to rewrite it so that it becomes more real-time and scalable, with the position-solver application running in the cloud or on-premises and utilizing the CPU as efficiently as possible, using all of the cores/threads.

If you were starting from scratch, which architecture, language and methods would you use?
E.g.:

GoLang? C++ with threads? Rust? Python?
Which architecture?
Docker?
GPU support?

Some metrics:
up to 10,000 sensors sending 200 JSON distance messages per second to the MQTT broker

(In my proof of concept there were just 20 sensors and 5 messages per second)
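
Whichever language wins, the ingest shape tends to be the same: a thin MQTT subscriber that hands each message to a worker pool sized to the machine's cores, so the solver never blocks the network callback. A sketch using the Eclipse Paho Java client; the broker URL and topic layout are placeholders:

```java
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Subscribes to sensor distance messages and fans them out to a pool of
// workers so the position solver can use every core.
public class DistanceIngest {
    public static void main(String[] args) throws MqttException {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService solverPool = Executors.newFixedThreadPool(cores);

        MqttClient client = new MqttClient("tcp://broker.example.com:1883",
                MqttClient.generateClientId());
        client.connect();

        // The MQTT callback thread must stay fast: parse and queue only,
        // never run the multi-trilateration solver inline.
        client.subscribe("sensors/+/distance", (topic, message) -> {
            String json = new String(message.getPayload());
            solverPool.submit(() -> solvePosition(topic, json));
        });
    }

    static void solvePosition(String topic, String json) {
        // Multilateration math goes here; batching readings per device helps.
        System.out.println("Solving from " + topic + ": " + json);
    }
}
```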

Any recommendations?

It will be an open-source project, by the way.

Best regards,
//E

Go to Source
Author: Ersan

Leveraging the Process Context in a Modular Monolith

I’m currently researching whether my current monolith architecture can be improved by moving to microservices.

From what I’ve seen, there are a lot of good things about microservices, but there are a lot of caveats and pitfalls that can complicate the whole system (transactions, data querying, data sync and duplication, fault tolerance, error handling, etc.).

It seems there is a middle ground in the “Modular Monolith” (1, 2, 3), where the system is designed vertically into loosely coupled modules that interact using APIs.

The question is how the fact that these modules operate against the same database and sit in the same process space can be leveraged to ease the complexity, i.e.:

  1. Can the modules declare a “transaction context” in their APIs that allows an ACID transaction when a business-logic operation crosses modules (in contrast to microservices, where this isn’t achievable by design)? See the sketch after this list.
  2. Can we declare database views in dedicated modules, allowing data to be joined at the database level rather than at the application level?
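
On point 1: yes, sharing one process and one database means a plain transaction handle can cross module boundaries. A minimal sketch with raw JDBC and invented module names; in practice a framework's transaction propagation (e.g. Spring's @Transactional) would do this plumbing:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Each module accepts the caller's transaction context instead of opening
// its own, so a cross-module operation stays a single ACID transaction.
public class CrossModuleTransaction {
    interface OrdersModule { void createOrder(Connection tx, String item) throws SQLException; }
    interface BillingModule { void charge(Connection tx, String item) throws SQLException; }

    public static void placeOrder(String jdbcUrl, OrdersModule orders,
                                  BillingModule billing, String item) throws SQLException {
        try (Connection tx = DriverManager.getConnection(jdbcUrl)) {
            tx.setAutoCommit(false);
            try {
                orders.createOrder(tx, item); // module A writes its own tables
                billing.charge(tx, item);     // module B writes its own tables
                tx.commit();                  // both changes become visible, or neither
            } catch (SQLException e) {
                tx.rollback();
                throw e;
            }
        }
    }
}
```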

If these cannot be leveraged, what’s the real difference between a modular monolith and microservices (besides independent scaling and deployment)?

Go to Source
Author: sborpo

Junior Developer Looking for Mentorship or Advice!

I am a junior developer in my first year of professional work experience. I have joined a very small start-up company where I am one of the few developers: we have two junior devs and an outsourced developer who helps with senior dev roles when needed. I am often given jobs and tasks that would more likely go to an intermediate/senior developer, and I am having to learn a lot as I progress through development cycles and make mistakes.

I do not really have access to a senior developer to mentor me properly, and I feel as though I might be learning bad practices since I do not have someone who looks over my code. I generally work well and can get these tasks done, but I sometimes have to ask the outsourced developer for help. Even in the tasks I complete, although I think I am performing well, I feel as though I am just getting by and not actually understanding half as much as I think I am.

I am wondering if there are any good sites where I can hire a mentor for about an hour a week to discuss the issues I faced that week, how I fixed them, etc. I wouldn’t need technical assistance, mostly just advice and assurance on the techniques I am using.

Cheers!

(I am a front-end dev trying to expand into full stack. I do almost all of the front-end work on the website and am trying to learn more about the backend as I go.)

Go to Source
Author: Tomas Brown

Architecture of Online Chess (2-player web-based board game)?

I’d like to try to create an application where 2 players can play chess online. The (possibly) novel feature would be that the process for joining a game would be similar to how Typeracer works. The first player creates a lobby then shares a link. The friend can then click the link to join the session immediately.

However, I’m a beginner when it comes to how information is shared over the internet. I’ve only created a few REST APIs and games in Java/Python so a lot of this would be new to me.

The first question I have is: how/where would I actually implement the logic for this game? Would everything be done client-side (I’d probably use React, so I’d have a JS library to hold all the game rules)? Or is everything done on the server?

Also, how would I keep the players’ game clients in sync? I’ve heard about WebSockets, but any elaboration would be helpful.

Lastly, how could I go about implementing the feature where users can share their game link to get the other player to join?
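
All three questions meet in one place: a small WebSocket endpoint that keys game sessions by an id taken from the shared link, validates moves server-side (so clients can't cheat), and relays each move to the opponent. A sketch with the Jakarta WebSocket API; it needs a container such as Tyrus or Tomcat to actually run, and the move validation is stubbed:

```java
import jakarta.websocket.OnMessage;
import jakarta.websocket.OnOpen;
import jakarta.websocket.Session;
import jakarta.websocket.server.PathParam;
import jakarta.websocket.server.ServerEndpoint;

import java.io.IOException;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Players connect to ws://host/game/{gameId}; the gameId comes from the
// shared link, so "click the link to join" falls out of the URL scheme.
@ServerEndpoint("/game/{gameId}")
public class ChessEndpoint {
    private static final Map<String, Set<Session>> games = new ConcurrentHashMap<>();

    @OnOpen
    public void onOpen(Session session, @PathParam("gameId") String gameId) {
        games.computeIfAbsent(gameId, id -> ConcurrentHashMap.newKeySet()).add(session);
    }

    @OnMessage
    public void onMove(String move, Session sender,
                       @PathParam("gameId") String gameId) throws IOException {
        if (!isLegalMove(gameId, move)) return; // authoritative server-side rules
        for (Session s : games.getOrDefault(gameId, Set.of())) {
            if (s.isOpen() && !s.equals(sender)) {
                s.getBasicRemote().sendText(move); // relay the move to the opponent
            }
        }
    }

    private boolean isLegalMove(String gameId, String move) {
        return true; // stub: real chess rules belong here, not only in the client
    }
}
```

Keeping the rules authoritative on the server, with the React client only rendering the board and sending move attempts, avoids trusting the browser; a client-side rules library is still useful for instant move highlighting.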

For context, I have quite a bit of experience with React for UI, I’ve used NodeJS on the server, and I’m currently learning about how to create web APIs with ASP.NET web API (C#). I figure that I’ll be able to create the board UI in React and handle any animations with CSS. If that’s not a good idea let me know!

Overall, I’m not really sure how I’ll tie this all together, so a big-picture view might help, and I’d love to hear about any frameworks or tools that might make this job easier. Thanks for the help in advance!

Go to Source
Author: Brad Stevanus

Is it a bad design to have 50K bindings on a single RabbitMQ queue?

We are designing a new feature in our system where consumers (consumer == internal application) need to receive messages about changes in items they are interested in.
From the statistics we gathered, we see that the maximum number of items a single consumer can be interested in is 50K (on average it would be ~15K).
Initial tests show that this works OK and RabbitMQ handles it, but when we delete such a queue (if, for example, we scale down the system and shut down one of the instances), it takes a few minutes to be deleted and the RabbitMQ management portal becomes unresponsive.

Does it make sense to have so many bindings or is it a bad design?

  • We’ll have around 50 consumer instances, each with its own queue, which is not persistent and should be auto-deleted when the consumer shuts down.
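
For reference, the setup described above looks roughly like this with the Java amqp-client; the exchange name and routing keys are placeholders:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

import java.util.List;

// One non-durable, auto-delete queue per consumer instance, bound once per
// item of interest. With ~50K items this creates ~50K bindings, and queue
// deletion must tear every one of them down again.
public class ItemSubscriber {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbit.example.com"); // assumed host
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();

        // durable=false, exclusive=false, autoDelete=true
        String queue = channel.queueDeclare("items-instance-42", false, false, true, null).getQueue();

        List<String> itemsOfInterest = List.of("item.1001", "item.1002" /* ... up to ~50K */);
        for (String routingKey : itemsOfInterest) {
            channel.queueBind(queue, "item-changes", routingKey);
        }

        channel.basicConsume(queue, true,
                (tag, delivery) -> System.out.println(new String(delivery.getBody())),
                tag -> { /* consumer cancelled */ });
    }
}
```

The minutes-long deletion is presumably the broker removing those bindings one by one; coarser routing keys (grouping items so each queue needs far fewer bindings) would be the usual way to shrink that cost.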

Go to Source
Author: Tamir Dresher