Assembly language problem

I’ve disassembled a simple hello world program in GDB that prints hello world 10 times, and I’ve come across this line: jmp 1156 <main+0x21>. What does the <main+0x21> part mean? I understand it says jump to memory location 1156, but I can’t figure out what that part means.
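
For context, the program I disassembled is essentially just this (a minimal reconstruction; my exact source may differ slightly):

#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 10; i++)
        printf("hello world\n");
    return 0;
}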

Go to Source
Author: Abhirup Bakshi

Memory address problem in GNU Debugger

[screenshot: program listing and GDB session]

I’m new to gdb. I’ve written a simple program that prints hello world 10 times (the program is listed in the screenshot). I displayed the assembly, set a breakpoint at main, ran the program until main, and then displayed the content of the rip register.

Now the confusion is: in the assembly listing, the memory addresses are 0000… followed by some number, but in the rip register the address is 55555… followed by some number. For example, line 4, containing the mov instruction, shows the address 0x000000000000113d, while the rip register holds 0x55555555513d. I’m assuming those two addresses are the same based on the common ending (13d), although I’m not sure. But the leading digits are different. Why?
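
To spell out my assumption with the numbers from the session (the load base is my guess; I’ve read that gdb typically loads PIE executables at 0x555555554000 when address randomization is disabled):

    0x0000555555554000   (assumed load base of the executable)
  + 0x000000000000113d   (offset shown in the disassembly listing)
  = 0x000055555555513d   (value seen in the rip register)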

Go to Source
Author: Abhirup Bakshi

In-memory pepper

As far as I understand, a problem with the idea of a pepper is that, if it’s stored as part of your code, then the hacker can read it if they can access your code.

So I was wondering: would it not be better to store the pepper only in memory? I’m thinking of running the server in such a way that the pepper is made available to the server’s memory without being visible in the environment, in the process list, or in the shell history (roughly as in the sketch below). To obtain it, the hacker would need to run a memory debugger as the user that’s running the server, or as root. Maybe even run the process under something like RamCrypt, to encrypt its memory at runtime.
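
Concretely, I’m imagining something like this sketch (my assumptions: a server written in C on Linux, with the pepper piped in on stdin at startup so it never appears in argv, the environment, or shell history):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

static char pepper[64];

int main(void)
{
    /* Keep the page holding the pepper out of swap. */
    if (mlock(pepper, sizeof pepper) != 0)
        perror("mlock");

    /* Read the pepper from stdin (e.g. piped in by whoever starts the
       server), so it is never passed via argv or the environment. */
    if (fgets(pepper, sizeof pepper, stdin) == NULL)
        return 1;
    pepper[strcspn(pepper, "\n")] = '\0';

    /* ... start the server; use `pepper` when hashing passwords ... */
    return 0;
}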

The scenario is that of running a server on a cloud Linux machine.

In that case, is there any better way of storing a pepper? Or is there something wrong with the in-memory idea?

Go to Source
Author: rid

Fortran subroutine NaN issue when multiple outputs are assigned to the same variable in the main program

I tend to use temporary variables to “ignore” some subroutine outputs. Since Fortran doesn’t have anything like Matlab’s tilde (~), I have to receive the outputs from the subroutine, but I assign them all to the same temporary variable of the right size. It has been my preference to make things look cleaner; it serves no practical purpose.

For years I had no problems with it; today, I am getting NaN. I isolated the problem to a matrix inverse operation in the “r_calc” subroutine, but I am not sure what is happening. If I replace the inverse with a transpose (which actually gives the same result for this matrix), the problem disappears.

My question is: is it bad practice to assign multiple subroutine outputs to the same variable in the calling code? Why am I having an issue with that when I use the matrix inverse? I appreciate any help.

Below is a minimum working example:

program example
implicit none

real(kind=8) :: temp1(3,3)
real(kind=8) :: temp2(3,3)
real(kind=8) :: temp3(3,3)
real(kind=8) :: temp4(3,3)
real(kind=8) :: temp5(3,3)
real(kind=8) :: r_PN_U(3)

call r_calc(4.2d0, &
    0d0, 0d0, &
    0d0, 0d0, &
    0d0, 0d0, &
    0d0,      &
    0d0, 0d0, &
    0d0,      &
    0d0, 0d0, &
    0d0, 0d0, &
    r_PN_U, &
    temp1, temp1, temp1, temp1, temp1)

print *, r_PN_U ! gives NaN if mat_inv_3 is used in the r_calc subroutine

call r_calc(4.2d0, &
    0d0, 0d0, &
    0d0, 0d0, &
    0d0, 0d0, &
    0d0,      &
    0d0, 0d0, &
    0d0,      &
    0d0, 0d0, &
    0d0, 0d0, &
    r_PN_U, &
    temp1, temp2, temp3, temp4, temp5)

print *, r_PN_U ! gives correct values even with mat_inv_3

end program example

subroutine r_calc(rin, &
e, ed, &
f, fd, &
v, vd, &
vp, &
w, wd, &
wp, &
theta, thetad, &
y_pos, z_pos, &
r_PN_U, &
C_DU_theta, C_DU_beta, C_DU_zeta, C_DU, C_UD)
implicit none

real(kind=8), intent(in) :: rin

real(kind=8), intent(in) :: e,     ed
real(kind=8), intent(in) :: f,     fd
real(kind=8), intent(in) :: v,     vd
real(kind=8), intent(in) :: vp
real(kind=8), intent(in) :: w,     wd
real(kind=8), intent(in) :: wp
real(kind=8), intent(in) :: theta, thetad

real(kind=8), intent(in) :: y_pos, z_pos

real(kind=8), intent(out) :: r_PN_U(3)

real(kind=8), intent(out) :: C_DU_theta(3,3)
real(kind=8), intent(out) :: C_DU_beta (3,3)
real(kind=8), intent(out) :: C_DU_zeta (3,3)
real(kind=8), intent(out) :: C_DU      (3,3)
real(kind=8), intent(out) :: C_UD      (3,3)

real(kind=8) :: beta, zeta

beta  = -atan(wp) ! [rad], flap down angle
zeta  =  atan(vp) ! [rad], lead angle

call angle2dcm(theta, beta, zeta, C_DU_theta, C_DU_beta, C_DU_zeta, C_DU)

call mat_inv_3(C_DU, C_UD) ! results in NaN in r_PN_U output
! C_UD = transpose(C_DU) ! gives the same result as inverse, eliminates the NaN issue

r_PN_U = [rin+e+f, v, w] + matmul(C_UD, [0d0, y_pos, z_pos])

end subroutine r_calc

subroutine angle2dcm(phi, theta, psi, C_phi, C_theta, C_psi, C_out)
implicit none

! Calculates the direction cosine matrix in psi - theta - phi (3 - 2 - 1) order
! Difference from "angle2dcm" subroutine is the extra outputs

real(kind=8), intent(in)  :: phi, theta, psi

real(kind=8), intent(out) :: C_psi(3,3), C_theta(3,3), C_phi(3,3), C_out(3,3)

C_phi(1,1:3) = [1d0,       0d0,      0d0]
C_phi(2,1:3) = [0d0,  cos(phi), sin(phi)]
C_phi(3,1:3) = [0d0, -sin(phi), cos(phi)]

C_theta(1,1:3) = [cos(theta), 0d0, -sin(theta)]
C_theta(2,1:3) = [       0d0, 1d0,         0d0]
C_theta(3,1:3) = [sin(theta), 0d0,  cos(theta)]

C_psi(1,1:3) = [ cos(psi),  sin(psi), 0d0]
C_psi(2,1:3) = [-sin(psi),  cos(psi), 0d0]
C_psi(3,1:3) = [      0d0,       0d0, 1d0]

C_out = matmul(C_phi, matmul(C_theta,C_psi)) ! psi - theta - phi (3 - 2 - 1) order

end subroutine angle2dcm

subroutine mat_inv_3(A, B)
implicit none

real(kind=8), intent(in)  :: A(3,3)
real(kind=8), intent(out) :: B(3,3)

real(kind=8) :: det

det = 1d0/(A(1,1)*A(2,2)*A(3,3) - A(1,1)*A(2,3)*A(3,2)&
  - A(1,2)*A(2,1)*A(3,3) + A(1,2)*A(2,3)*A(3,1)&
  + A(1,3)*A(2,1)*A(3,2) - A(1,3)*A(2,2)*A(3,1))

B(1,1) = +det * (A(2,2)*A(3,3) - A(2,3)*A(3,2))
B(2,1) = -det * (A(2,1)*A(3,3) - A(2,3)*A(3,1))
B(3,1) = +det * (A(2,1)*A(3,2) - A(2,2)*A(3,1))
B(1,2) = -det * (A(1,2)*A(3,3) - A(1,3)*A(3,2))
B(2,2) = +det * (A(1,1)*A(3,3) - A(1,3)*A(3,1))
B(3,2) = -det * (A(1,1)*A(3,2) - A(1,2)*A(3,1))
B(1,3) = +det * (A(1,2)*A(2,3) - A(1,3)*A(2,2))
B(2,3) = -det * (A(1,1)*A(2,3) - A(1,3)*A(2,1))
B(3,3) = +det * (A(1,1)*A(2,2) - A(1,2)*A(2,1))

end subroutine mat_inv_3

Go to Source
Author: Seyhan Gul

What goes into a computer deciding how many memory locations to assign for specific data types in C?

I have learned file memory management and some very simple CPU assembly for manual memory manipulation, but I feel like there is a gap in my knowledge when it comes to modern, complex computers, OSs, and compilers. What I am wondering is what goes into the decision to allocate a set amount of memory for different data types. On x86-64 systems it seems that 8 locations of byte-addressable memory are allocated for pointers, which consist of 48-bit addresses. Is the system of allocation similar to Linux’s buddy system for files? Why 8 bytes instead of 6? Can it only split in half (limited to powers of 2), or is there a purposeful reason it goes for 8 bytes instead of 6?
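
For example, this is the observation I’m asking about; on my x86-64 machine (GCC on Linux, I assume the numbers vary elsewhere) a quick check like this prints 8 for a pointer:

#include <stdio.h>

int main(void)
{
    printf("char:  %zu\n", sizeof(char));   /* 1 by definition */
    printf("int:   %zu\n", sizeof(int));    /* typically 4 */
    printf("long:  %zu\n", sizeof(long));   /* typically 8 on 64-bit Linux */
    printf("void*: %zu\n", sizeof(void *)); /* 8, even though only 48 bits are used for addressing */
    return 0;
}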

I am wondering about the whole process. When you run a program, its program text is loaded into memory alongside the variables set at compile time, and I assume the compiler has already decided, based on the computer system, how many memory locations to ask for for each variable’s data type. But how does it decide this?

Any resources you could point me towards would be helpful! Thanks!

Go to Source
Author: infinity8-room

C++ array takes more input than its size

I used to know that arrays in C++ can’t hold more elements than specified, except for resizable arrays or dynamic memory allocation. But in the simple code below, if I put a value at the 3rd or 4th index, it compiles and runs without error; when I put a value at the 5th index, it compiles fine but gives a runtime error; and for the 6th index it compiles and runs fine. It seems to go on like this randomly.

Is there some concept I didn’t know about, or did I do something wrong?

#include <iostream>
using namespace std;

int main(){

    int arr[2]={2,2};

    arr[0] = 1;
    arr[1] = 2;
    arr[2] = 3; // out of bounds, but compiles and runs without error
    arr[4] = 4; // also no error
    arr[5] = 5; // gives a runtime error
    arr[6] = 6; // runs fine again

}

Go to Source
Author: Abdullah Al Nahian

How to run out of memory in kernel

If we want to run out of memory in the kernel from a user-space program, should we keep calling msgsnd() to allocate kernel memory (roughly as in the sketch below), or is there another way? Besides, a user-space process cannot access kernel-space memory, but can kernel code access user-space memory? If the kernel uses up its kernel-space memory, will it use memory in user space?
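
To make the first part concrete, this is roughly what I mean by keeping msgsnd() going (an untested sketch; in practice limits such as msgmni and msgmnb would have to be raised before this exhausts kernel memory):

#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>

struct msg { long mtype; char payload[8192]; };

int main(void)
{
    struct msg m;
    m.mtype = 1;
    memset(m.payload, 'x', sizeof m.payload);

    for (;;) {
        /* Each private queue and every message queued on it lives in
           kernel memory. */
        int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);
        if (qid == -1) { perror("msgget"); break; }

        /* Fill this queue until its msgmnb limit is hit, then make another. */
        while (msgsnd(qid, &m, sizeof m.payload, IPC_NOWAIT) == 0)
            ;
    }
    return 0;
}
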
Thanks

Go to Source
Author: David Lee

Memory Map pinning in Kernel process

From a hardware perspective, servers have PCI memory management that assigns address translation across the global memory on the board. There are a few parts to this question:

  1. Is there a permanent partition available that functions with GTK-compiled kernels, transparent to the PCI multiplexer (really a bus arbitrator)? The IBM P8 architecture has 2 direct CPU-to-memory bus ports. Of course P8 has its own Linux seat compilers; I’m not sure how source code translates memory addressing in Tensor to a GPU that indexes a flat memory map to DRAM on the server. I don’t want to malloc or pin the DRAM.

  2. I believe boundaries can be set up to prevent heap/stack ingress into a central DRAM area. This is on a 512 GB server.

  3. Is it possible to set each Xeon’s memory completely independently on an E5-2690, so that CPU 1 and CPU 2 each have an independent 256 GB address space? Thread allocation to RAM would be great; the question is how to also run from the legacy heap/stack. This is useful to avoid garbage collection in the memory-mapped area.

I am a hardware design engineer. Is any development of non-memory-leaking kernels / unikernels available?

Go to Source
Author: Rus Talis