Segmentation fault using a structure, C++, and binary files

I have the following code, which writes a structure to a binary file and then reads it back.

#include<iostream>
#include<fstream>
using namespace std;
struct Student {
   int roll_no;
   string name;
};
int main() {
   ofstream wf("student.dat", ios::out | ios::binary);
   if(!wf) {
      cout << "Cannot open file!" << endl;
      return 1;
   }
   Student wstu[3];
   wstu[0].roll_no = 1;
   wstu[0].name = "Ram";
   wstu[1].roll_no = 2;
   wstu[1].name = "Shyam";
   wstu[2].roll_no = 3;
   wstu[2].name = "Madhu";
   for(int i = 0; i < 3; i++)
      wf.write((char *) &wstu[i], sizeof(Student));
   wf.close();
   if(!wf.good()) {
      cout << "Error occurred at writing time!" << endl;
      return 1;
   }
   ifstream rf("student.dat", ios::in | ios::binary);
   if(!rf) {
      cout << "Cannot open file!" << endl;
      return 1;
   }
   Student rstu[3];
   for(int i = 0; i < 3; i++)
      rf.read((char *) &rstu[i], sizeof(Student));
   
   if(!rf.good()) {
      cout << "Error occurred at reading time!" << endl;
      return 1;
   }
   rf.close();
   cout<<"Student's Details:"<<endl;
   for(int i=0; i < 3; i++) {
      cout << "Roll No: " << wstu[i].roll_no << endl;
      cout << "Name: " << wstu[i].name << endl;
      cout << endl;
   }
   
   return 0;
}

I am getting a segmentation fault. Stepping through with the debugger, I get the following output:

....
Roll No: 3
45        cout << "Name: " << wstu[i].name << endl;
(gdb) s
Name: Madhu
46        cout << endl;
(gdb) s

43     for(int i=0; i < 3; i++) {
(gdb) s
49     return 0;
(gdb) s
33     Student rstu[3];
(gdb) s
Student::~Student (this=0x7fffffffd700, __in_chrg=<optimized out>) at f9.cpp:4
4   struct Student {
(gdb) s

Program received signal SIGSEGV, Segmentation fault.
0x00007ffff72d3c71 in free () from /lib/x86_64-linux-gnu/libc.so.6

Why am I getting this segmentation fault? It seems that the problem is with the destructor of my structure, yet the error points to line 4. Why?
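
For contrast, here is a minimal sketch (not taken from the question) of a record layout that is safe to dump and reload byte-for-byte, assuming fixed-length names are acceptable. A std::string member only stores a pointer to heap-allocated characters, so writing sizeof(Student) raw bytes and reading them back leaves the string's internals pointing at garbage, which free() then trips over in the destructor.

#include <cstring>
#include <fstream>

// Minimal sketch: a trivially copyable record, so raw write()/read() is safe.
struct StudentRecord {
    int  roll_no;
    char name[32];   // fixed-size buffer instead of std::string
};

int main() {
    StudentRecord rec{};
    rec.roll_no = 1;
    std::strncpy(rec.name, "Ram", sizeof(rec.name) - 1);

    std::ofstream wf("student.dat", std::ios::binary);
    wf.write(reinterpret_cast<const char*>(&rec), sizeof(rec));
    return 0;
}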

Go to Source
Author: Kintaro Oe

Assembly language problem

I've disassembled a simple hello world program in GDB which prints hello world 10 times, and I've come across this line: jmp 1156 <main+0x21>. What does <main+0x21> mean? I understand it says jump to memory location 1156, but I can't figure out what that part means.

Go to Source
Author: Abhirup Bakshi

how does dynamic linker resolve references in run time?

Let's say I have a source file dll.c that uses the dlopen and dlsym functions to load a shared library called F.so at run time.

dll.c has a reference to some_function(), and F.so has the definition of some_function().

And let's say the picture below shows the executable object file prog, which is obtained by:

linux> gcc -rdynamic -o prog dll.c -ldl
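
For reference, a minimal sketch of what dll.c might look like under this setup (the question doesn't show it; the file name F.so and the symbol some_function are taken from the description above):

#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    /* load the shared library at run time */
    void *handle = dlopen("./F.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    /* look up the symbol and call it through the returned pointer */
    void (*some_function)(void) = (void (*)(void)) dlsym(handle, "some_function");
    if (!some_function) {
        fprintf(stderr, "%s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    some_function();
    dlclose(handle);
    return 0;
}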

(image of the section layout of the executable object file prog)

So the .text section contains the reference to some_function() that needs to be resolved when the program loads F.so and starts to call some_function().

My questions are:

Q1: It seems to me that the .text section (which contains the reference to some_function()) in RAM (the executable is copied into memory) needs to be modified by the dynamic linker so that the reference to some_function() can be resolved. Is my understanding correct?

Q2: If the dynamic linker needs to modify the .text section in RAM, how does it do it? From my understanding, the .text section is a read-only segment in RAM; how can a read-only segment be modified if it is, by definition, read-only?

Go to Source
Author: slowjams

How do I determine what methods are being implemented by another method?

I’m learning C# (VS 2019) after 20 years of coding ANSI-C. Let’s say some of the basics are still fuzzy.

In the book I'm using, the author introduced the override concept by overriding the virtual Object.ToString() method inside a user-defined class, which is then passed to a Console.WriteLine() call in the Main() program. That is, in the user class there is a public override string ToString() method which basically overrides the base .NET implementation. I get how all that works.
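
As a minimal sketch of that pattern (the class and property names here are made up, not from the book), Console.WriteLine(object) ends up calling the argument's ToString(), so the override is what gets printed:

using System;

class Person
{
    public string Name { get; set; } = "Ada";

    // override of the virtual Object.ToString()
    public override string ToString() => $"Person: {Name}";
}

class Program
{
    static void Main()
    {
        var p = new Person();
        Console.WriteLine(p);   // prints "Person: Ada" via the override
    }
}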

What I can't find is a way to determine that Console.WriteLine() used the ToString() method to do the work.

Is there a way to inspect .NET method code in some given class to find out if there are virtual methods being used that I can exploit with an override in my own code? That is, how did the author know the ToString() method was being used by Console.WriteLine()?

Go to Source
Author: scottrod

Extracting all data from a list tag using HTML Agility Pack in C#

I am using the HTML Agility Pack to extract data. I want to extract all list items from the source:

<div id="feature-bullets" class="a-section a-spacing-medium a-spacing-top-small">

<ul class="a-unordered-list a-vertical a-spacing-mini">

<li><span class="a-list-item">
some data 1

</span></li>

<li><span class="a-list-item">
some data 2

</span></li>

<li><span class="a-list-item">
some data 3

</span></li>

<li><span class="a-list-item">
some data 4

</span></li>

</ul>

My code so far:

string source = someSource;

var htmlDoc = new HtmlAgilityPack.HtmlDocument();
htmlDoc.LoadHtml(source);

How can I extract all the list items to get a result similar to this:

List value 1 is: some data 1
List value 2 is: some data 2
List value 3 is: some data 3
List value 4 is: some data 4
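
A minimal sketch of one possible approach (the XPath is based on the markup above; adjust it to the real source):

var nodes = htmlDoc.DocumentNode.SelectNodes("//span[@class='a-list-item']");
if (nodes != null)
{
    int i = 1;
    foreach (var node in nodes)
    {
        // InnerText holds the span's text; Trim() removes the surrounding whitespace
        Console.WriteLine($"List value {i++} is: {node.InnerText.Trim()}");
    }
}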

Go to Source
Author: Josie

Cannot locate libsdl2-2.o.so file

I'm following an SDL C++ tutorial, and they advised me to copy that file (libsdl2-2.o.so) into my project folder to avoid errors if the code is run on a different computer. I have installed both libsdl2-2.0 and libsdl2-dev on my machine, but I cannot find the files in /lib/x86_64-linux-gnu (like in the tutorial). Is the library maybe stored in a different location now?

Note: I can run code including the SDL header with no problem, so it does exist on my computer, somewhere…

I did try

locate libsdl2-2.o.so

but with no luck

Go to Source
Author: Netsu

How can I hide a flag from the `strings` command

I want to create a reverse-engineering CTF in which the user needs to discover which string to type in order to execute a function that prints the flag. But with a simple strings command in the shell, the flag inside the printf call can be discovered. How can I prevent that?

#include <stdio.h>
#include <string.h>

void print_flag() {
    printf("secret_string discovered. flag: {eAsy_p3asy}");
}

int main(void)
{
    char line[64];

    /* echo input until the user types the secret string */
    while (fgets(line, sizeof line, stdin)) {
        line[strcspn(line, "\n")] = '\0';
        if (strcmp(line, "secret_string") == 0)
            break;
        puts(line);
    }
    print_flag();
    return 0;
}
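
For what it's worth, one common way to keep the plaintext out of strings output (a sketch, not taken from the question) is to store the flag obfuscated, e.g. XOR-ed with a key, and decode it only at runtime:

#include <stdio.h>

/* "{eAsy_p3asy}" with every byte XOR-ed with the key 0x5A */
static const unsigned char enc[] = {
    0x21, 0x3F, 0x1B, 0x29, 0x23, 0x05,
    0x2A, 0x69, 0x3B, 0x29, 0x23, 0x27
};

void print_flag(void) {
    char buf[sizeof enc + 1];
    for (size_t i = 0; i < sizeof enc; i++)
        buf[i] = (char)(enc[i] ^ 0x5A);   /* decode at runtime */
    buf[sizeof enc] = '\0';
    printf("secret_string discovered. flag: %s\n", buf);
}

int main(void)
{
    print_flag();
    return 0;
}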

Go to Source
Author: ArlichBachman

Doing a validation check on an AJAX post and returning the error message

I have an AJAX post that does this

$.ajax({
    type: "POST",
    url: "@MyWebSite.Url/myController/myView",
    contentType: "application/json; charset=utf-8",
    data: JSON.stringify({ myModel: myData }),
    dataType: "json",
    traditional: true,
    success: function () {
        alert('Success!');
    },
    error: function () {
        alert('Error! ');
    }
});

My controller does the validation check but it is not correctly returning the error message.
This is what my controller looks like

if (totalQty < part.QtyInItem)
{
    //ModelState.AddModelError("", "There is " + part.QtyInItem + " of Part " + part.PartName + " used in item " + part.ItemID + " but you only handled " + totalQty + ". Please verify you are handling all parts used in the item.");
    //RedirectToAction("myControler", myModel);
    return this.Json(new { success = false, message = "There is " + part.QtyInItem + " of Part " + part.PartName + " used in item " + part.ItemID + " but you only handled " + totalQty + ". Please verify you are handling all parts used in the item." });
}

When I tried adding an error to the model state, it just returned “ERROR!” and not the error message I had associated with it. And when I try doing the this.Json return, it returns “success” to the view and not the error message.

How can I do this validation check for my AJAX post?
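
For what it's worth, a sketch of one common way to surface that message (not taken from the question): return this.Json(...) still answers with HTTP 200, so jQuery invokes the success callback, and the success flag and message have to be checked there; the error callback only fires for non-2xx responses.

success: function (response) {
    if (response.success) {
        alert('Success!');
    } else {
        // message produced by the controller's Json(...) above
        alert(response.message);
    }
},
error: function (xhr) {
    // only reached for non-success status codes
    alert('Request failed (' + xhr.status + ')');
}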

Go to Source
Author: ryan

How is design using C different from C++?

An employer is looking for C programmers, and I'm told they say that …

Good C design isn’t the same as good C++ design

… and so they’re looking for candidates experienced with C and not only C++.

How is the design of a large C system (hundreds of thousands or millions of lines of code) very different from that of C++?

Are the skills required of a developer very different? What differences should an experienced developer expect?

I’ve read Why are most Linux programs written in C? — including Linus’ little “two minute hate” at http://harmful.cat-v.org/software/c++/linus — but that doesn’t answer my question, which might be, “How is a well-designed C system unlike well-designed C++?” Or are they similar, and is Linus’ argument all there is to it?

I read Lakos’ Large-scale C++ Software Design — is there anything at all like that for C?


I’m trying to write this such that it isn’t a duplicate of:

Please assume I already know the differences between the languages.

I used C in the early 90s (before C++ became popular on PCs), and for writing device drivers on Windows (in the kernel where the C++ run-time library wasn’t supported), and I learned C++ incrementally as a superset of C.

IMO there's an obvious mapping between C and C++, in that what's written in one can be written in the other, for example:

  • C — a “file pointer” or “file handle”, plus an API of related functions which take a handle-or-pointer as a parameter, plus an underlying data structure (possibly hidden/encapsulated) which contains state associated with each handle (see the sketch after this list)
  • C++ — ditto, except that the “data structure” and “associated functions” are encapsulated in a class, as data members and methods
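
A minimal sketch of that C "handle + API" pattern (the names are made up for illustration): an opaque struct plus free functions that take the handle, with the real layout private to one translation unit.

#include <stdlib.h>

typedef struct Widget Widget;   /* opaque handle as seen by callers */

struct Widget { int state; };   /* real layout, normally hidden in the .c file */

Widget *widget_open(void)            { return calloc(1, sizeof(Widget)); }
void    widget_set(Widget *w, int v) { w->state = v; }
int     widget_get(const Widget *w)  { return w->state; }
void    widget_close(Widget *w)      { free(w); }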

C++ has additional syntactic and type-checking sugar (e.g. templates and operator overloading), and its destructors allow RAII and reference-counting smart pointers, but apart from that …

And C has no first-class/language support for polymorphism, but e.g. a device driver on Windows is an installable plug-in that exports its entry points, more or less like a vtable.

Go to Source
Author: ChrisW

Numerical integration of multidimensional integral [closed]

I'm looking for software that can numerically integrate multidimensional integrals, ideally compatible with or written in C. I've found an answer for one-dimensional integrals here: https://stackoverflow.com/questions/1564543/c-math-library-with-integration

Any suggestions or pointers? Ideally it would support parametric bounds, i.e., bounds that are not necessarily numbers but can also be expressions involving the other integration variables. For example, for a two-dimensional integral in x and y, the lower bound for the integration over y may be 1 - x.

Go to Source
Author: damianodamiano

What goes into a computer deciding how many memory locations to assign for specific data types in C?

I have learned file memory management and some very simple CPU assembly for manual memory manipulation, but I feel like there is a gap in my knowledge when it comes to modern, complex computers, OSs, and compilers. What I am wondering is what goes into the decision process to allocate a set amount of memory for different data types. On x86 systems it seems that 8 locations of byte-addressable memory are allocated for pointers consisting of 48-bit addresses. Is the system of allocation similar to that of Linux's buddy system for files? Why 8 bytes instead of 6? Can it only split in half (limited to powers of 2), or is there a purposeful reason it goes for 8 bytes instead of 6?

I am wondering about the whole process. When you run a program and its code is loaded into memory alongside the variables set at compile time, I assume that the compiler has already decided, based on the computer system, how many memory locations to ask for for each variable's data type. But how does it decide this?
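
As a quick illustration (not from the question), the size of each type is fixed by the compiler for the target ABI, and can simply be printed:

#include <stdio.h>

int main(void)
{
    printf("char:   %zu\n", sizeof(char));
    printf("int:    %zu\n", sizeof(int));
    printf("long:   %zu\n", sizeof(long));
    printf("double: %zu\n", sizeof(double));
    printf("void *: %zu\n", sizeof(void *));   /* 8 on typical x86-64 ABIs */
    return 0;
}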

Any resources you could point me towards would be helpful! Thanks!

Go to Source
Author: infinity8-room

C++ array accepts more input than its size

I used to think that arrays in C++ can't hold more elements than specified, except for resizable arrays or dynamic memory allocation. But in the simple code below, if I put a value at the 3rd or 4th index it compiles and runs without error; when I put a value at the 5th index it compiles fine but gives a runtime error; and for the 6th index it compiles and runs fine. It seems to go on like this randomly.

Is this some concept I didn't know, or did I do something wrong?

#include <iostream>
using namespace std;

int main(){

    int arr[2]={2,2};

    arr[0] = 1;
    arr[1] = 2;
    arr[2] = 3;
    arr[4] = 4;
    arr[5] = 5; //gives a runtime error
    arr[6] = 6;

}
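
For contrast (not part of the question), a bounds-checked access fails deterministically instead of silently writing past the array; a minimal sketch:

#include <iostream>
#include <stdexcept>
#include <vector>

int main() {
    std::vector<int> v = {2, 2};
    try {
        v.at(5) = 5;   // at() checks the index and throws
    } catch (const std::out_of_range& e) {
        std::cout << "out of range: " << e.what() << '\n';
    }
}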

Go to Source
Author: Abdullah Al Nahian

Managing the disposal of network connections

I am writing a class — let’s call it MessageSender — that needs to perform operations over the network. It basically does these things:

  1. Take some configuration
  2. Establish a connection
  3. Send stuff

If we ignore the cleanup of any resources, this would look like this:

var sender = new MessageSender("127.0.0.1");
sender.Connect();
sender.SendMessage("Hello world");

The thing I am unsure about is how to manage the disposal of the established connection. I thought of three options, of which I ended up implementing the last one.

(1) Having a dedicated Disconnect() method the user must call:

var sender = new MessageSender("127.0.0.1");
sender.Connect();
sender.SendMessage("Hello world");
sender.Disconnect();

(2) The MessageSender implements IDisposable:

using (var sender = new MessageSender("127.0.0.1"))
{
    sender.Connect();
    sender.SendMessage("Hello world");
}

(3) The Connect() method returns an IDisposable:

var sender = new MessageSender("127.0.0.1");
using (var connection = sender.Connect())
{
    sender.SendMessage("Hello world");
}

I have never seen the third option anywhere, but it does seem to have some advantages:

  • Construction, and hence configuration, of the message sender is separated from establishing a connection. E.g. the object itself can be some other class's member, constructed and passed down as a dependency to others, while the execution and therefore the actual connection can be deferred to some Run() method.
  • The need for connection tear-down is a direct result from the connection set-up and cannot be (accidentally) separated.
  • If implemented that way, Connect() could be called multiple times.
  • Using IDisposable in general over a dedicated tear-down method gives you better language support, e.g. the using-clauses I used in both (2) and (3).

Potential pitfalls I see for all of the above solutions:

  • Failing to run the tear-down logic. This includes:
    • for (1): Forgetting to call Disconnect()
    • for (2) and (3): Forgetting to properly handle disposables
    • for (3): Ignoring the return value of Connect() entirely
  • MessageSender needs to keep track of its connection state to disallow multiple calls to Connect().
  • Calling SendMessage can fail at runtime depending on the current connection state.

Are there advantages to other approaches or disadvantages to my approach I am not aware of?

Here’s a somewhat simplified version of my actual code:

public sealed class MessageSender
{
    private readonly Some3rdPartyNetworkClient _client;
    private bool _connected = false;

    public MessageSender(string connectionString)
    {
        _client = new Some3rdPartyNetworkClient(connectionString);
    }

    public void SendMessage(string message)
    {
        if (!_connected) throw new InvalidOperationException("not connected.");
        _client.SendMessage(message);
    }

    private sealed class DelegateDisposer : IDisposable
    {
        private readonly Action _dispose;
        public DelegateDisposer(Action dispose) => _dispose = dispose;
        public void Dispose() => _dispose();
    }

    public IDisposable Connect()
    {
        if (_connected) throw new InvalidOperationException("Can only ever connect once.");
        _connected = true;
        
        _client.Connect();
        var tokenSource = new CancellationTokenSource();
        Task checkConnectivityWorker = CheckConnectivityWorker(tokenSource.Token);
        return new DelegateDisposer(() =>
        {
            tokenSource.Cancel();
            if (!checkConnectivityWorker.IsCanceled) checkConnectivityWorker.Wait();
            _client.Disconnect();
        });
    }

    private async Task CheckConnectivityWorker(CancellationToken cancellationToken)
    {
        // some stuff that needs to be done continuously while the connection is active
    }
}

Go to Source
Author: Felk

Load single module based on configuration using dependency injection

I’m working on an application that will run on multiple systems and may use different modules to communicate with external systems, but on each system, only one module will be used at a time. As it should be possible to change the used module on a specific system, the same application with all modules should be deployed to all systems. For simplicity, let’s assume that there are two modules called Foo and Bar.

Both modules have their own module descriptor that registers the module components to the dependency injection container:

public class FooModule : IModule
{
    public void Configure(IServiceCollection services)
    {
        services.AddTransient<IService, FooService>();
        // Register dependencies of FooService
    }
}

I know that Autofac supports modules out of the box (even with support for configuration) and there are several libraries that add such a feature to Microsoft.Extensions.DependencyInjection, but I want to ask this question with a general look at the concept of dependency injection.

If the services of all modules should be used at the same time, I would be done. Given they implement the same service, I could inject them using IEnumerable<IService>. But in my use case, there is a component that requires a single IService and I want to select the implementation based on a configuration (e.g. from a file).

Now there are several approaches where to apply that configuration and I’m not sure which one should be preferred:

1st approach – load single assembly

I could read the configuration and then load only the external assembly that contains the IModule that should be used. This would require the introduction of some “magic” link between the configuration values and the names of the module assemblies, as the core application should not know the extension modules beforehand.

2nd approach – call single module

All the assemblies are loaded, but using a link between the configuration values and the names of the module classes (or namespaces), only the one module that should be used will be called to set up the passed IServiceCollection.
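
A minimal sketch of this 2nd approach (the configuration key "Modules:Active", the BarModule name, and the naming convention are assumptions for illustration; Single() needs System.Linq):

// all modules are present, but only the configured one registers its services
IModule[] modules = { new FooModule(), new BarModule() };

string active = configuration["Modules:Active"];   // e.g. "Foo"

IModule selected = modules.Single(m => m.GetType().Name == active + "Module");
selected.Configure(services);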

3rd approach – let the module decide

The modules decide on their own whether they are configured and should therefore provide their service implementation. Instead of evaluating the configuration in the core application, the configuration gets passed to the modules:

public class FooModule : IModule
{
    public void Configure(IServiceCollection services, IConfiguration configuration)
    {
        if (configuration.GetSection("foo").Exists())
        {
            services.AddTransient<IService, FooService>();
        }
    }
}

4th approach – use some DI container feature

I know that Autofac and other DI containers support named / keyed service registrations that would basically solve this problem and therefore answer this question for me. However, I guess there is a reason why other DI containers like Microsoft.Extensions.DependencyInjection do not provide this feature.


What approach would you prefer? Is there an approach or some variation that I missed? Is there anything else I should keep in mind when designing a system in that way?

Go to Source
Author: Lukas Körfer