What naming convention should I use for a JSON RPC client API designed for multiple languages?

This is the documentation with the original RPC client API specification. The naming convention in the specification is camel case with the first letter in lower case.

Naming conventions differ in subtle ways between languages (e.g. camel case with vs. without capitalization of the first letter), but for conventions like snake case (Python) or Swift’s Fluent Usage API, changing the names from the original specification might increase the cognitive load for those who are already familiar with the specification.

When searching GitHub for different JSON RPC APIs, some implementations seem to take advantage of reflection to intercept method calls and pass them to the RPC request “as is”, so the method names in that language are the same as in the original spec. If reflection is not available, the names are hardcoded and are mostly the same as in the spec, changing only the capitalization of some letters for some languages.
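
For illustration, here is a minimal sketch of that reflection-style approach in Python (roughly the equivalent of Ruby’s method_missing); the endpoint URL and the getBlockCount method are made-up placeholders, not taken from any particular specification:

import json
import urllib.request

class RPCClient:
    """Minimal JSON-RPC 2.0 proxy: any unknown attribute lookup becomes an RPC
    call, so method names are forwarded to the server exactly as written."""

    def __init__(self, url):
        self.url = url
        self._id = 0

    def __getattr__(self, method):
        def call(*params):
            self._id += 1
            payload = json.dumps({
                "jsonrpc": "2.0",
                "id": self._id,
                "method": method,        # spec name passed through unchanged
                "params": list(params),
            }).encode()
            request = urllib.request.Request(
                self.url, data=payload,
                headers={"Content-Type": "application/json"})
            with urllib.request.urlopen(request) as response:
                return json.loads(response.read())["result"]
        return call

# client = RPCClient("http://localhost:8332")   # placeholder endpoint
# client.getBlockCount()                        # sent as "getBlockCount", verbatim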

Some examples:

Not using Fluent Usage in Swift

https://github.com/fanquake/CoreRPC/blob/master/Sources/CoreRPC/Blockchain.swift
https://github.com/brunophilipe/SwiftRPC/blob/master/SwiftRPC/SwiftRPC+Requests.swift

Not using snake case in Ruby

https://github.com/sinisterchipmunk/bitcoin-client/blob/master/lib/bitcoin-client/client.rb

Changing method names to Pascal case (upper camel case) in C#

https://github.com/cryptean/bitcoinlib/blob/master/src/BitcoinLib/Services/RpcServices/RpcService/RpcService.cs

Go to Source
Author: rraallvv

How does poker analysis software read the cards from the poker room client? [closed]

I would like to write poker software similar to PokerTracker and Holdem Manager that gives real-time stats during a game of poker. These programs somehow read the current cards being played and the player names from the poker client software. I assumed they did this by reading the poker client’s log files, but when I tried that, it seems the log files are not updated in real time, only after each game has completed.

How do they do it?

Go to Source
Author: user2096512

Fortran subroutine NaN issue when multiple outputs are assigned to the same variable in the main program

I tend to use temporary variables to “ignore” some subroutine outputs. Since Fortran doesn’t have anything like MATLAB’s tilde (~) for discarding outputs, I have to receive the outputs from the subroutine, but I assign them all to the same temporary variable of the right size. It has been my preference to make things look cleaner; it serves no practical purpose.

For years I had no problems with this; today, I am getting NaN. I isolated the problem to a matrix inverse operation in the “r_calc” subroutine, but I am not sure what is happening. If I replace the inverse with a transpose (which gives the same result for this matrix, since the direction cosine matrix is orthogonal), the problem disappears.

My question is: is it bad practice to assign multiple subroutine outputs to the same variable in the main program? And why am I having an issue with that only when I use the matrix inverse? I appreciate any help.

Below is a minimal working example:

program example
implicit none

real(kind=8) :: temp1(3,3)
real(kind=8) :: temp2(3,3)
real(kind=8) :: temp3(3,3)
real(kind=8) :: temp4(3,3)
real(kind=8) :: temp5(3,3)
real(kind=8) :: r_PN_U(3)

call r_calc(4.2d0, &
    0d0, 0d0, &
    0d0, 0d0, &
    0d0, 0d0, &
    0d0,      &
    0d0, 0d0, &
    0d0,      &
    0d0, 0d0, &
    0d0, 0d0, &
    r_PN_U, &
    temp1, temp1, temp1, temp1, temp1)

print *, r_PN_U ! gives NaN if mat_inv_3 is used in the r_calc subroutine

call r_calc(4.2d0, &
    0d0, 0d0, &
    0d0, 0d0, &
    0d0, 0d0, &
    0d0,      &
    0d0, 0d0, &
    0d0,      &
    0d0, 0d0, &
    0d0, 0d0, &
    r_PN_U, &
    temp1, temp2, temp3, temp4, temp5)

print *, r_PN_U ! gives correct values even with mat_inv_3

end program example

subroutine r_calc(rin, &
e, ed, &
f, fd, &
v, vd, &
vp, &
w, wd, &
wp, &
theta, thetad, &
y_pos, z_pos, &
r_PN_U, &
C_DU_theta, C_DU_beta, C_DU_zeta, C_DU, C_UD)
implicit none

real(kind=8), intent(in) :: rin

real(kind=8), intent(in) :: e,     ed
real(kind=8), intent(in) :: f,     fd
real(kind=8), intent(in) :: v,     vd
real(kind=8), intent(in) :: vp
real(kind=8), intent(in) :: w,     wd
real(kind=8), intent(in) :: wp
real(kind=8), intent(in) :: theta, thetad

real(kind=8), intent(in) :: y_pos, z_pos

real(kind=8), intent(out) :: r_PN_U(3)

real(kind=8), intent(out) :: C_DU_theta(3,3)
real(kind=8), intent(out) :: C_DU_beta (3,3)
real(kind=8), intent(out) :: C_DU_zeta (3,3)
real(kind=8), intent(out) :: C_DU      (3,3)
real(kind=8), intent(out) :: C_UD      (3,3)

real(kind=8) :: beta, zeta

beta  = -atan(wp) ! [rad], flap down angle
zeta  =  atan(vp) ! [rad], lead angle

call angle2dcm(theta, beta, zeta, C_DU_theta, C_DU_beta, C_DU_zeta, C_DU)

call mat_inv_3(C_DU, C_UD) ! results in NaN in the r_PN_U output
! C_UD = transpose(C_DU) ! gives the same result as the inverse and eliminates the NaN issue

r_PN_U = [rin+e+f, v, w] + matmul(C_UD, [0d0, y_pos, z_pos])

end subroutine r_calc

subroutine angle2dcm(phi, theta, psi, C_phi, C_theta, C_psi, C_out)
implicit none

! Calculates the direction cosine matrix in psi - theta - phi (3 - 2 - 1) order
! The difference from MATLAB's "angle2dcm" function is the extra outputs

real(kind=8), intent(in)  :: phi, theta, psi

real(kind=8), intent(out) :: C_psi(3,3), C_theta(3,3), C_phi(3,3), C_out(3,3)

C_phi(1,1:3) = [1d0,       0d0,      0d0]
C_phi(2,1:3) = [0d0,  cos(phi), sin(phi)]
C_phi(3,1:3) = [0d0, -sin(phi), cos(phi)]

C_theta(1,1:3) = [cos(theta), 0d0, -sin(theta)]
C_theta(2,1:3) = [       0d0, 1d0,         0d0]
C_theta(3,1:3) = [sin(theta), 0d0,  cos(theta)]

C_psi(1,1:3) = [ cos(psi),  sin(psi), 0d0]
C_psi(2,1:3) = [-sin(psi),  cos(psi), 0d0]
C_psi(3,1:3) = [      0d0,       0d0, 1d0]

C_out = matmul(C_phi, matmul(C_theta,C_psi)) ! psi - theta - phi (3 - 2 - 1) order

end subroutine angle2dcm

subroutine mat_inv_3(A, B)
implicit none

real(kind=8), intent(in)  :: A(3,3)
real(kind=8), intent(out) :: B(3,3)

real(kind=8) :: det

! Note: det stores the reciprocal of the determinant; B below is the adjugate scaled by it
det = 1d0/(A(1,1)*A(2,2)*A(3,3) - A(1,1)*A(2,3)*A(3,2)&
  - A(1,2)*A(2,1)*A(3,3) + A(1,2)*A(2,3)*A(3,1)&
  + A(1,3)*A(2,1)*A(3,2) - A(1,3)*A(2,2)*A(3,1))

B(1,1) = +det * (A(2,2)*A(3,3) - A(2,3)*A(3,2))
B(2,1) = -det * (A(2,1)*A(3,3) - A(2,3)*A(3,1))
B(3,1) = +det * (A(2,1)*A(3,2) - A(2,2)*A(3,1))
B(1,2) = -det * (A(1,2)*A(3,3) - A(1,3)*A(3,2))
B(2,2) = +det * (A(1,1)*A(3,3) - A(1,3)*A(3,1))
B(3,2) = -det * (A(1,1)*A(3,2) - A(1,2)*A(3,1))
B(1,3) = +det * (A(1,2)*A(2,3) - A(1,3)*A(2,2))
B(2,3) = -det * (A(1,1)*A(2,3) - A(1,3)*A(2,1))
B(3,3) = +det * (A(1,1)*A(2,2) - A(1,2)*A(2,1))

end subroutine mat_inv_3

Go to Source
Author: Seyhan Gul

Best architecture and methods for high-performance computing that needs to scale

I have to make a decision regarding the architecture and methods for the rewrite of a proof-of-concept application I wrote 10 years ago in C++.

It’s about high-performance position calculation based on multi-trilateration.
Hundreds to thousands of IoT sensors send their JSON-based distance information to a host using MQTT. From there the information needs to be processed.

My goal is to rewrite it so that it becomes more real-time and scalable, and so the position-solver application can run in the cloud or on-premises while using the CPU as efficiently as possible, utilizing all of the cores/threads.
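
As a rough illustration of the “use all of the cores” part in Python (one of the languages under consideration), the per-message solver work can be fanned out across worker processes. This is only a sketch: solve_position and message_source are placeholder names, and the MQTT subscription itself is assumed to be handled by whichever broker client library you pick.

import json
from concurrent.futures import ProcessPoolExecutor

def solve_position(payload: bytes) -> dict:
    # CPU-bound work: turn one JSON distance report into a position fix.
    # The actual multi-trilateration solver is omitted here.
    report = json.loads(payload)
    return {"sensor": report.get("sensor_id"), "position": None}

def run(message_source):
    # Fan incoming MQTT payloads out to one worker process per CPU core.
    # message_source stands for an iterable of raw payloads taken off the
    # MQTT subscription (e.g. a queue fed by the broker client).
    with ProcessPoolExecutor() as pool:  # defaults to os.cpu_count() workers
        for fix in pool.map(solve_position, message_source, chunksize=64):
            print(fix)

The same fan-out pattern maps onto goroutines in Go, a thread pool in C++, or Rust’s rayon; the sketch only shows that the per-message solver work parallelizes naturally.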

If you were starting from scratch, which architecture, language and methods would you use?
E.g.

GoLang? C++ with threads? Rust? Python?
Architecture?
Docker?
GPU support?

Some metrics:
up to 10,000 sensors are sending distance data, 200 JSON messages per second, to the MQTT broker

(In my proof of concept there were just 20 sensors and 5 messages per second)

Any recommendations?

It will be an open-source project, by the way.

Best regards,
//E

Go to Source
Author: Ersan

Time complexity of a small piece of code

I’m trying to find the time complexity for the following code.

N = number of elements in the array
D = a constant, D > 1
V = a constant, V > 1000

counter=1; //the maximum value of the counter is N/D.
for(i=0; i<N; i++)
{
    [OP1]   O1_Operation;        // O(1) operation.   [Total: N times]
    [OP2]   if(i%D!=0) continue; // O(1) operation.   [Total: N times]

    [OP3]   for(j=0;j<counter;j++) //                 [Total: {(N/D)*((N/D)+1)}/2 times] 
    [OP4]        for(s=0;s<V;s++)
    [OP5]            O1_Operation; // O(1) operation. [Total: (V*{(N/D)*((N/D)+1)}/2) times] 

    [OP6]   counter++;             // O(1) operation. [Total: N/D times]
 }

I annotated each operation with its time complexity and the total number of times it is executed. What confuses me in this code is the mod operation: it allows the block OP[3-6] to run only N/D times in total.

For [OP3], the loop executes 1 time on the first pass, 2 times on the second, ..., up to N/D times, so the total number of executions is [(N/D) * ((N/D)+1)] / 2. Removing D and V because they are constants leads to a complexity of O(N^2) for the whole code.
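
As a quick sanity check of that count, here is a small Python sketch; N, D and V below are made-up values chosen with N divisible by D, not taken from the question:

def count_op5(N, D, V):
    # Count how many times the innermost O(1) operation [OP5] executes.
    counter, ops = 1, 0
    for i in range(N):
        if i % D != 0:
            continue
        ops += counter * V   # [OP3] runs `counter` times, [OP4] runs V times each
        counter += 1
    return ops

N, D, V = 1200, 3, 1000
measured  = count_op5(N, D, V)
predicted = V * (N // D) * ((N // D) + 1) // 2   # V * [(N/D)*((N/D)+1)]/2
print(measured, predicted, measured == predicted)  # 80200000 80200000 True

For fixed constants D and V the dominant term is (V / (2*D^2)) * N^2, which is O(N^2), consistent with the conclusion above.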

Is this correct?

Go to Source
Author: Alice