
There is no convenient interface for object pinning #265

Open
mgood7123 opened this issue Sep 25, 2023 · 36 comments
Labels: optional (Will cause failures / of benefit. Worth assigning resources.), question

Comments

@mgood7123

mgood7123 commented Sep 25, 2023

It appears that MPS does not provide an interface for conveniently pinning objects (in AMC, the moving GC).

@mgood7123
Author

mgood7123 commented Sep 25, 2023

To support pinning and unpinning multiple objects I implemented the following.

MPS states that a moving GC must not move objects referenced by ambiguous pointers.

Note that this is not guaranteed to keep the object alive between the object's allocation and the object being pinned.

    // object pinning support:
    // an array (registered as an ambiguous root) tracks pinned objects
    managed_obj_t* pinned;
    size_t pinned_used;
    size_t pinned_capacity;
    mps_root_t pinned_root;
  state->pinned_used = 0;
  state->pinned_capacity = 2;
  state->pinned = calloc(state->pinned_capacity, sizeof(managed_obj_t));
  if (state->pinned == NULL) managed_obj_error("Couldn't allocate pinned memory");

  res = mps_root_create_area_tagged(
      &state->pinned_root, state->arena, mps_rank_ambig(),
      (mps_rm_t)0, state->pinned, state->pinned+state->pinned_capacity,
      mps_scan_area_tagged, sizeof(mps_word_t) - 1, (mps_word_t)0
  );
  if (res != MPS_RES_OK) managed_obj_error("Couldn't create pinned root");
void managed_obj_pin(ManagedObjState * state, managed_obj_t obj) {
  int found = 0;
  for (size_t i = 0; i < state->pinned_capacity; i++) {
    if (state->pinned[i] == obj) {
      found = 1;
      break;
    }
  }
  // don't pin an object that has already been pinned
  if (found == 1) return;
  if (state->pinned_used == state->pinned_capacity) {
    // grow the pinned array
    size_t new_capacity = state->pinned_capacity * 2;
    managed_obj_t * new_pinned = calloc(new_capacity, sizeof(managed_obj_t));
    if (new_pinned == NULL) managed_obj_error("Couldn't allocate pinned memory");
    // copy old pinned to new pinned
    memcpy(new_pinned, state->pinned, sizeof(managed_obj_t)*state->pinned_capacity);
    mps_root_t new_pinned_root = NULL;
    mps_res_t res = mps_root_create_area_tagged(
        &new_pinned_root, state->arena, mps_rank_ambig(),
        (mps_rm_t)0, new_pinned, new_pinned+new_capacity,
        mps_scan_area_tagged, sizeof(mps_word_t) - 1, (mps_word_t)0
    );
    if (res != MPS_RES_OK) managed_obj_error("Couldn't create pinned root");
    // both arrays are roots here, so it is safe to destroy the old root
    mps_root_destroy(state->pinned_root);
    state->pinned_root = new_pinned_root;
    free(state->pinned);
    state->pinned = new_pinned;
    state->pinned_capacity = new_capacity;
  }
  for (size_t i = 0; i < state->pinned_capacity; i++) {
    if (state->pinned[i] == NULL) {
      state->pinned[i] = obj;
      state->pinned_used++;
      break;
    }
  }
}

void managed_obj_unpin(ManagedObjState * state, managed_obj_t obj) {
  int found = 0;
  size_t index = 0;
  for (size_t i = 0; i < state->pinned_capacity; i++) {
    if (state->pinned[i] == obj) {
      found = 1;
      index = i;
      break;
    }
  }
  // don't unpin an object that has not been pinned
  if (found == 0) return;
  state->pinned[index] = NULL;
  state->pinned_used--;
  // compact so that free slots stay at the end of the array
  memmove(state->pinned+index, state->pinned+index+1, sizeof(managed_obj_t)*(state->pinned_capacity-(index+1)));
  state->pinned[state->pinned_capacity-1] = NULL;
  // shrink at half occupancy, but never below a capacity of 2
  if (state->pinned_capacity > 2 && state->pinned_used == state->pinned_capacity / 2) {
    size_t new_capacity = state->pinned_capacity / 2;
    managed_obj_t * new_pinned = malloc(sizeof(managed_obj_t)*new_capacity);
    if (new_pinned == NULL) managed_obj_error("Couldn't allocate pinned memory");
    // copy old pinned to new pinned
    memcpy(new_pinned, state->pinned, sizeof(managed_obj_t)*new_capacity);
    mps_root_t new_pinned_root = NULL;
    mps_res_t res = mps_root_create_area_tagged(
        &new_pinned_root, state->arena, mps_rank_ambig(),
        (mps_rm_t)0, new_pinned, new_pinned+new_capacity,
        mps_scan_area_tagged, sizeof(mps_word_t) - 1, (mps_word_t)0
    );
    if (res != MPS_RES_OK) managed_obj_error("Couldn't create pinned root");
    // both arrays are roots here, so it is safe to destroy the old root
    mps_root_destroy(state->pinned_root);
    state->pinned_root = new_pinned_root;
    free(state->pinned);
    state->pinned = new_pinned;
    state->pinned_capacity = new_capacity;
  }
}

@rptb1
Member

rptb1 commented Sep 25, 2023

Yes, to pin an object, store a reference to it in an ambiguous root.

If you have registered your threads as roots, then any object you can "see" directly from your running code (in a local variable, argument, etc.) is pinned.
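A minimal sketch of this advice, assuming an existing `mps_arena_t arena` (the function and variable names here are illustrative, not part of the MPS API): keep the slot in unmanaged memory and register it as a one-word ambiguous root.

```c
#include "mps.h"

/* The slot must live in unmanaged memory and stay valid for the
 * lifetime of the root. */
static mps_addr_t pin_slot;
static mps_root_t pin_root;

/* Pin: store obj in the slot and register the slot as an ambiguous
 * root, so a moving collector may neither move nor reclaim it. */
mps_res_t pin_one(mps_arena_t arena, mps_addr_t obj)
{
    pin_slot = obj;
    return mps_root_create_area(&pin_root, arena, mps_rank_ambig(),
                                (mps_rm_t)0,
                                &pin_slot, &pin_slot + 1,
                                mps_scan_area, NULL);
}

/* Unpin: destroy the root; the object becomes movable again. */
void unpin_one(void)
{
    mps_root_destroy(pin_root);
    pin_slot = NULL;
}
```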

May I ask why you are pinning objects? It would be very useful to know your requirement.

@rptb1 rptb1 self-assigned this Sep 25, 2023
@rptb1 rptb1 added the question label Sep 25, 2023
@mgood7123
Author

mgood7123 commented Sep 26, 2023

Yes, to pin an object, store a reference to it in an ambiguous root.

If you have registered your threads as roots, then any object you can "see" directly from your running code (in a local variable, argument, etc.) is pinned.

In terms of a moving GC, does this prevent the GC from compacting memory?

Though we can observe objects being forwarded even when the thread is registered as an ambiguous root,

pinning WILL NOT move an object even when garbage collection is explicitly triggered, yet pinning simply stores the object in an ambiguous root.

May I ask why you are pinning objects? It would be very useful to know your requirement.

Completeness, I guess. Also, pinning is useful when passing objects to code not managed by MPS (such as third-party libraries, or a runtime-to-native transition; e.g. you would need to pin objects to prevent them from being moved during the execution of a native function that MPS does not know about).

@rptb1
Member

rptb1 commented Sep 26, 2023

In terms of a moving gc, does such prevent the gc from compacting memory ?

Yes. If you're using the AMC pool class for example, the pinned objects don't get compacted. They will also reduce the performance. So it's best not to do it if you can avoid it.

Tho we can observe objects being forwarded even though the thread is registered as an ambiguous root

The program should not be able to observe this directly. If you e.g. print the addresses of objects, you might notice the changes, but the running program should not be able to observe it. If you can produce an example of this, it would be a bug, and we'd like to know about it.

A running program can observe movement if it somehow depends on the actual bit patterns of pointers, e.g. for hashing. In that case, the program must use a location dependency.
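For the address-hashing case, the pattern looks roughly like this (a sketch; the table structure and function names are assumptions, while `mps_ld_reset`, `mps_ld_add`, and `mps_ld_isstale` are the MPS location-dependency calls):

```c
#include "mps.h"
#include <stddef.h>

typedef struct {
    mps_ld_s ld;        /* location dependency covering all hashed keys */
    /* ... buckets ... */
} addr_table_t;

void table_init(addr_table_t *t, mps_arena_t arena)
{
    mps_ld_reset(&t->ld, arena);
}

/* Hash the pointer's bit pattern, recording that the table now
 * depends on this object's address. */
size_t addr_hash(addr_table_t *t, mps_arena_t arena, mps_addr_t obj)
{
    mps_ld_add(&t->ld, arena, obj);
    return (size_t)(mps_word_t)obj >> 3;
}

/* On a lookup miss, the object may simply have moved: if the
 * dependency is stale, rehash the table and retry the lookup. */
int lookup_must_rehash(addr_table_t *t, mps_arena_t arena, mps_addr_t obj)
{
    return mps_ld_isstale(&t->ld, arena, obj);
}
```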

Completeness i guess, also pinning is useful when passing objects to code not managed by MPS (such as 3rd party libraries, or a runtime > native transition (eg, you would need to pin objects to prevent them from being moved during the execution of a native function that MPS does not know about))

If you call a third-party library from the MPS on a thread that is registered as a root then you do not need to pin objects, as long as that library uses the thread stack. This covers most cases.

This is a key ability of the MPS and is one of the main reasons for its existence. Dylan, and our commercial clients, want to be able to call foreign code naturally.

@mgood7123
Author

Tho we can observe objects being forwarded even though the thread is registered as an ambiguous root

I forgot that I was using a custom memory root instead of a registered thread.

Objects indeed do not move if we have a registered thread.

The program should not be able to observe this directly.

it can if you supply a custom fwd function :)

If you call a third-party library from the MPS on a thread that is registered as a root then you do not need to pin objects, as long as that library uses the thread stack. This covers most cases.

True, though if memory compaction is implemented (via an exact/semi-exact GC) then object pinning will be required.

@mgood7123
Author

mgood7123 commented Sep 26, 2023

Also, if the hash table stores strong keys and strong values, is it still possible for them to be collected even though they still exist in the hash table?

E.g., if we store only the hash table in a root, and store a key/value only in the hash table and not in a root:

root = table
table.store(new key, new value)

then the key and value can be collected unless they are pinned, regardless of whether the table uses a strong AP (rank exact) or a weak AP (rank weak).

Even if we explicitly fix the keys/values, they still get collected:

      case MANAGED_OBJECT_TYPE_METADATA_HASH_TABLE: { \
        size_t i, length; \
        printf("scan keys %p\n", obj->metadata_hashtable.keys); \
        MANAGED_OBJECT_FIX(obj->metadata_hashtable.keys); \
        if (obj->metadata_hashtable.key_is_weak == 0) { \
            length = MANAGED_OBJECT_METADATA_UNTAG_COUNT(obj->metadata_hashtable.keys->length); \
            for (i = 0; i < length; ++i) { \
                MANAGED_OBJECT_FIX(obj->metadata_hashtable.keys->bucket[i]); \
            } \
        } \
        printf("scan values %p\n", obj->metadata_hashtable.values); \
        MANAGED_OBJECT_FIX(obj->metadata_hashtable.values); \
        if (obj->metadata_hashtable.value_is_weak == 0) { \
            length = MANAGED_OBJECT_METADATA_UNTAG_COUNT(obj->metadata_hashtable.values->length); \
            for (i = 0; i < length; ++i) { \
                MANAGED_OBJECT_FIX(obj->metadata_hashtable.values->bucket[i]); \
            } \
        } \
        base = (char *)base + MANAGED_OBJECT_ALIGN_OBJ(sizeof(managed_obj_metadata_hashtable_s)); \
        break; } \

So it seems like the AWL pool will ALWAYS allow its objects to be collected, regardless of whether they are stored as an exact ref or a weak ref (storing an object in AWL never prevents it from being collected).

@rptb1
Member

rptb1 commented Sep 26, 2023

true, tho if compacting memory is implemented (via an exact/semi-exact GC) then object pinning will be required

That's right. That's why the AMC pool class is our most used and best developed: it allows for compaction but copes well in the case of foreign code on the stack.

it can if you supply a custom fwd function :)

Sneaky!

also, if the hash table stores strong keys and strong values, is it still possible for such to be collected even if they still exist in the hash table ?

Objects are preserved by anything that is scanned. That includes roots, and also objects in pools that may contain references, i.e. pools that scan their objects. If your hash table is in such a pool, it will keep everything that it refers to alive.

The only exception is if you allocate objects with weak references, which do not preserve the objects to which they refer.

@rptb1
Member

rptb1 commented Sep 26, 2023

so it seems like the AWL pool will ALWAYS allow its objects to be collected regardless if they are stored as an exact ref or a weak ref (storing such in AWL will never prevent it from being collected)

This indicates that you may be allocating your exact table as weak by mistake.

@rptb1
Member

rptb1 commented Sep 26, 2023

You need to create two allocation points in AWL: one weak and one exact. Allocate the exact table with the exact AP, etc. Otherwise the exact table will be scanned as weak, and its referents could die.
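Sketched with the keyword-argument interface (assuming an existing AWL pool in `pool`; `managed_obj_error` is the error handler from the code above):

```c
#include "mps.h"

mps_ap_t exact_ap, weak_ap;
mps_res_t res;

/* Exact AP: objects allocated here are scanned at exact rank, so
 * their referents are kept alive.  Use this for the strong table. */
MPS_ARGS_BEGIN(args) {
    MPS_ARGS_ADD(args, MPS_KEY_RANK, mps_rank_exact());
    res = mps_ap_create_k(&exact_ap, pool, args);
} MPS_ARGS_END(args);
if (res != MPS_RES_OK) managed_obj_error("Couldn't create exact AP");

/* Weak AP: references in objects allocated here do not keep their
 * referents alive.  Use this only for the weak buckets. */
MPS_ARGS_BEGIN(args) {
    MPS_ARGS_ADD(args, MPS_KEY_RANK, mps_rank_weak());
    res = mps_ap_create_k(&weak_ap, pool, args);
} MPS_ARGS_END(args);
if (res != MPS_RES_OK) managed_obj_error("Couldn't create weak AP");
```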

@rptb1
Member

rptb1 commented Sep 28, 2023

There's nothing obviously wrong with your code. Is it possible that we can reproduce this ourselves? Is your project public? Can we run and debug it? If so, please could you file a new issue about non-weak objects dying with reproduction instructions. Thank you.

@mgood7123
Author

mgood7123 commented Sep 29, 2023

The problem was that I was storing a copy of the original pointer and assuming the GC would not move the object when collecting.

@mgood7123
Author

So weak-key tables, weak-value tables, and fully weak tables all work as expected with regard to finalization.

(I was mistakenly logging inside

      if(MANAGED_OBJECT_TYPE(obj) == MANAGED_OBJECT_TYPE_EMPTY) {
        state->freed_obj_bytes += sizeof(managed_obj_empty_s);
        state->freed_aligned_obj_bytes += MANAGED_OBJECT_ALIGN_OBJ(sizeof(managed_obj_empty_s));
        printf("object %p is being finalized.\n", obj);

instead of

      printf("object %p is being finalized.\n", obj);

      if(MANAGED_OBJECT_TYPE(obj) == MANAGED_OBJECT_TYPE_EMPTY) {
        state->freed_obj_bytes += sizeof(managed_obj_empty_s);
        state->freed_aligned_obj_bytes += MANAGED_OBJECT_ALIGN_OBJ(sizeof(managed_obj_empty_s));

)

@mgood7123
Author

mgood7123 commented Sep 30, 2023

Anyway, back to the main issue: can this be done more efficiently?

(Implementation as in my earlier comment above: pin/unpin via an array registered as an ambiguous root.)

@Ravenbot
Member

Ravenbot commented Oct 1, 2023 via email

@mgood7123
Author

mgood7123 commented Oct 2, 2023

True.

If we allocate and pin 10,000 objects then we end up with:

a pinned capacity of 131072 bytes, in which 80000 bytes are used
an aligned allocation of 240000 bytes
a pool total of 253952 bytes (as reported by mps_pool_total_size(state->pool) - mps_pool_free_size(state->pool))
and an area commit of 1290240 bytes (of 32 MB reserved)

so, we have 240 KB aligned alloc, plus 80 KB pinned, for 10,000 objects

given that, here, 1 object is 4 bytes (24 bytes when aligned), and 1 pinned slot is 8 bytes

if we do the same with 200 thousand objects then we get

a pinned capacity of 2097152 bytes, in which 1600000 bytes are used
an aligned allocation of 4800000 bytes (with 800000 bytes unaligned allocation)
a pool total of 4816896 bytes (as reported by mps_pool_total_size(state->pool) - mps_pool_free_size(state->pool))
and an area commit of 22962176 bytes (of 32 MB reserved)

so, we have 4.8 MB aligned alloc (0.8 MB unaligned alloc), plus 1.6 MB pinned (2 MB pinned capacity), for 200,000 objects

and for 1 million (1,000,000) we get

a pinned capacity of 8388608 bytes, in which 8000000 bytes are used
an aligned allocation of 24000000 bytes (with 4000000 bytes unaligned allocation)
a pool total of 24027136 bytes (as reported by mps_pool_total_size(state->pool) - mps_pool_free_size(state->pool))
and an area commit of 90095616 bytes (of 167776256 bytes reserved)

so, we have 24 MB aligned alloc (4 MB unaligned alloc), plus 8 MB pinned (8.3 MB pinned capacity), for 1,000,000 objects

(with 1 KB = 1000 Bytes)

@mgood7123
Author

Allocating 1 million objects takes around 4 seconds:

allocating 1 million objects
allocated 1 million objects in 4 seconds, 518 milliseconds, 728 microseconds

but PINNING 1 million objects takes

allocating and pinning 1 million objects
allocated and pinned 1 million objects in 1265 seconds, 168 milliseconds, 732 microseconds

@thejayps thejayps changed the title Object Pinning There is no convenient interface for object pinning Oct 2, 2023
@thejayps thejayps added the optional Will cause failures / of benefit. Worth assigning resources. label Oct 2, 2023
@thejayps thejayps self-assigned this Oct 2, 2023
@thejayps
Contributor

thejayps commented Oct 2, 2023

@thejayps look over this

@rptb1
Member

rptb1 commented Oct 2, 2023

in which allocating 1 million objects takes around 4 seconds

allocating 1 million objects
allocated 1 million objects in 4 seconds, 518 milliseconds, 728 microseconds

but PINNING 1 million objects takes

allocating and pinning 1 million objects
allocated and pinned 1 million objects in 1265 seconds, 168 milliseconds, 732 microseconds

Yes, allocation is very fast. It's done inline in only a few instructions.

I assume that in this pinning benchmark you're measuring your own code for adding an ambiguous root, since the MPS does not provide an interface. Is there a use case for doing that? If you were e.g. making FFI calls to an external library at a high rate (e.g. a graphics renderer) then you would not need to explicitly pin the objects if you're passing things on the stack. There is no cost to that.

If you know at allocation time that your objects need to not move, you could allocate them in a non-moving pool and there would be no need to pin them.

The MPS is designed with the assumption that the mutator will not need to explicitly pin and unpin large numbers of objects. If you have a reason to do it, we'd definitely like to hear about it.
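For instance, objects known at allocation time to be immovable could go in a non-moving collected pool such as AMS (a sketch; `arena`, the object format `fmt`, and `managed_obj_error` are assumed to exist already):

```c
#include "mps.h"
#include "mpscams.h"   /* mps_class_ams */

mps_pool_t fixed_pool;
mps_res_t res;

/* AMS is automatically collected but never moves objects, so nothing
 * allocated here ever needs pinning. */
MPS_ARGS_BEGIN(args) {
    MPS_ARGS_ADD(args, MPS_KEY_FORMAT, fmt);
    res = mps_pool_create_k(&fixed_pool, arena, mps_class_ams(), args);
} MPS_ARGS_END(args);
if (res != MPS_RES_OK) managed_obj_error("Couldn't create AMS pool");
/* Allocate never-moving objects via an AP on fixed_pool. */
```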

@mgood7123
Author

mgood7123 commented Oct 2, 2023

I tried using a hash table/map https://github.com/DavidLeeds/hashmap (modified), but that's even slower:

allocating and pinning 1 million objects
rehashing from size 128 to size 256
rehashed 97 keys
rehashing from size 256 to size 512
rehashed 193 keys
rehashing from size 512 to size 1024
rehashed 385 keys
rehashing from size 1024 to size 2048
rehashed 769 keys
rehashing from size 2048 to size 4096
rehashed 1537 keys
rehashing from size 4096 to size 8192
rehashed 3073 keys
rehashing from size 8192 to size 16384
rehashed 6145 keys
rehashing from size 16384 to size 32768
rehashed 12289 keys
rehashing from size 32768 to size 65536
rehashed 24577 keys
rehashing from size 65536 to size 131072
rehashed 49153 keys
rehashing from size 131072 to size 262144
rehashed 98305 keys
rehashing from size 262144 to size 524288
rehashed 196609 keys
rehashing from size 524288 to size 1048576
rehashed 393217 keys
rehashing from size 1048576 to size 2097152
rehashed 786433 keys
allocated and pinned 1 million objects in 3270 seconds, 327 milliseconds, 219 microseconds
int pinned_map_compare_func(const union managed_obj_u * a, const union managed_obj_u * b) {
  return a == b ? 0 : 1;
}

size_t pinned_map_hash_func(const union managed_obj_u * data) {
  return managed_object_hashmap_hash_default(data, sizeof(data));
}

void pinned_map_on_realloc_func(void * user_data, void * new_pinned, size_t new_capacity) {
    ManagedObjState * state = (ManagedObjState*)user_data;
    mps_root_t new_pinned_root = NULL;
    if (new_capacity != 0) {
      mps_res_t res = mps_root_create_area_tagged(
          &new_pinned_root, state->arena, mps_rank_ambig(),
          (mps_rm_t)0, new_pinned, new_pinned+new_capacity,
          mps_scan_area_tagged, sizeof(mps_word_t) - 1, (mps_word_t)0
      );
      if (res != MPS_RES_OK) managed_obj_error("Couldn't create pinned root");
    }
    // both are pinned, we can safely destroy old pin
    if (state->pinned_root != NULL)
      mps_root_destroy(state->pinned_root);
    state->pinned_root = new_pinned_root;
}

void managed_obj_pin(ManagedObjState * state, managed_obj_t obj) {
  if (state->pinned.map_base.table_size == 0) {
    managed_object_hashmap_init(&state->pinned, pinned_map_hash_func, pinned_map_compare_func);
    managed_object_hashmap_set_on_realloc_func(&state->pinned, pinned_map_on_realloc_func);
  }
  /* Insert a my_value (fails and returns -EEXIST if the key already exists) */
  int result = managed_object_hashmap_put(&state->pinned, state, obj, obj);
  if (result == -EEXIST) return;
}

void managed_obj_unpin(ManagedObjState * state, managed_obj_t obj) {
  if (state->pinned.map_base.table_size != 0) {
    if (managed_object_hashmap_remove(&state->pinned, obj) != NULL) {
      if (state->pinned.map_base.size == 0) {
        managed_object_hashmap_cleanup(&state->pinned, state);
      }
    }
  }
}

@mgood7123
Author

My use case would be a compacting garbage-collected language with C interop support.

Currently I intend to transpile down to C/C++ and fall back to interpreted execution if a compiler is unavailable.

@mgood7123
Author

mgood7123 commented Oct 2, 2023

Basically I seem to have six design/implementation goals for object pinning:

1. assign an object into an ambiguous root to prevent the GC from moving said object
2. protect against double pin/unpin (pinning the same object twice should be a no-op, and unpinning twice is idiomatic); however, we might need recursive (counted) pinning depending on API usage, not sure
3. keep track of which memory slots in the ambiguous root are used/free, in order to recycle unused slots
4. pinning an object should be fast regardless of how many objects are currently pinned
5. unpinning an object should be as fast as pinning an object
6. objects must be pinnable and unpinnable in arbitrary order
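Several of the goals above (duplicate protection, slot recycling, fast pin and unpin) can be sketched without any MPS machinery: keep the pinned slots dense and remove by swapping with the last slot, which avoids the memmove in the earlier implementation. A standalone sketch (the ambiguous-root re-registration on realloc is elided; a side hash table from address to slot index would make the lookup O(1) as well):

```c
#include <stdlib.h>

/* Hypothetical stand-in for managed_obj_t: any pointer works here. */
typedef void *obj_t;

typedef struct {
    obj_t *slots;      /* dense: slots[0..used) are the pinned objects */
    size_t used;
    size_t capacity;
} pin_table_t;

void pin_table_init(pin_table_t *t) {
    t->used = 0;
    t->capacity = 2;
    t->slots = calloc(t->capacity, sizeof *t->slots);
}

/* Linear search; a pointer-to-index hash map would make this O(1). */
size_t pin_find(const pin_table_t *t, obj_t obj) {
    for (size_t i = 0; i < t->used; i++)
        if (t->slots[i] == obj) return i;
    return (size_t)-1;
}

void pin(pin_table_t *t, obj_t obj) {
    if (pin_find(t, obj) != (size_t)-1) return;  /* already pinned: no-op */
    if (t->used == t->capacity) {
        /* Grow; with MPS, re-create the ambiguous root over the new area. */
        t->capacity *= 2;
        t->slots = realloc(t->slots, t->capacity * sizeof *t->slots);
    }
    t->slots[t->used++] = obj;                   /* append: array stays dense */
}

void unpin(pin_table_t *t, obj_t obj) {
    size_t i = pin_find(t, obj);
    if (i == (size_t)-1) return;                 /* not pinned: no-op */
    t->slots[i] = t->slots[--t->used];           /* swap-with-last: O(1) */
    t->slots[t->used] = NULL;
}
```

Unpinning in arbitrary order works because the swap keeps the live slots contiguous, so free slots never need to be searched for.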

I tried:

void managed_obj_pin(ManagedObjState * state, managed_obj_t obj) {
  if (obj->empty.pinned_root == NULL) {
      mps_res_t res = mps_root_create_area_tagged(
          &obj->empty.pinned_root, state->arena, mps_rank_ambig(),
          (mps_rm_t)0, &obj->empty.pinned, (&obj->empty.pinned) + 1,
          mps_scan_area_tagged, sizeof(mps_word_t) - 1, (mps_word_t)0
      );
      if (res != MPS_RES_OK) managed_obj_error("Couldn't create pinned root");
      obj->empty.pinned = obj;
      state->pinned_used++;
  }
}

void managed_obj_unpin(ManagedObjState * state, managed_obj_t obj) {
  if (obj->empty.pinned_root != NULL) {
    obj->empty.pinned = NULL;
    mps_root_destroy(obj->empty.pinned_root);
    obj->empty.pinned_root = NULL;
    state->pinned_used--;
  }
}

but it seems to page fault while scanning during root creation, which leads to a double lock (presumably because the root area here lives inside memory managed by the MPS).

@mgood7123
Author

The only way I can see to do this efficiently would be to integrate it directly into the AMC pool class as a queryable flag.

@mgood7123
Author

mgood7123 commented Oct 3, 2023

OK, I implemented object pinning directly in the GC and now it is a lot faster:

allocating 1 million objects
allocated 1 million objects in 4 seconds, 35 milliseconds, 502 microseconds
allocating and pinning 1 million objects
allocated and pinned 1 million objects in 60 seconds, 229 milliseconds, 457 microseconds
diff --git a/code/config.h b/code/config.h
index 02d335e..5561bec 100644
--- a/code/config.h
+++ b/code/config.h
@@ -353,6 +353,7 @@
 #define FMT_SKIP_DEFAULT (&FormatNoSkip)
 #define FMT_FWD_DEFAULT (&FormatNoMove)
 #define FMT_ISFWD_DEFAULT (&FormatNoIsMoved)
+#define FMT_ISPINNED_DEFAULT (&FormatNoIsPinned)
 #define FMT_PAD_DEFAULT (&FormatNoPad)
 #define FMT_CLASS_DEFAULT (&FormatDefaultClass)
 
diff --git a/code/fmtdy.c b/code/fmtdy.c
index 6cfbca6..d43763a 100644
--- a/code/fmtdy.c
+++ b/code/fmtdy.c
@@ -771,6 +771,7 @@ static struct mps_fmt_A_s dylan_fmt_A_s =
   dylan_copy,
   dylan_fwd,
   dylan_isfwd,
+  no_ispinned,
   dylan_pad
 };
 
@@ -782,6 +783,7 @@ static struct mps_fmt_B_s dylan_fmt_B_s =
   dylan_copy,
   dylan_fwd,
   dylan_isfwd,
+  no_ispinned,
   dylan_pad,
   dylan_class
 };
@@ -816,6 +818,7 @@ static struct mps_fmt_A_s dylan_fmt_A_weak_s =
   no_copy,
   no_fwd,
   no_isfwd,
+  no_ispinned,
   no_pad
 };
 
@@ -827,6 +830,7 @@ static struct mps_fmt_B_s dylan_fmt_B_weak_s =
   no_copy,
   no_fwd,
   no_isfwd,
+  no_ispinned,
   no_pad,
   dylan_class
 };
diff --git a/code/fmthe.c b/code/fmthe.c
index 76f7239..0dfd422 100644
--- a/code/fmthe.c
+++ b/code/fmthe.c
@@ -138,6 +138,7 @@ static struct mps_fmt_auto_header_s HeaderFormat =
   dylan_header_skip,
   NULL, /* later overwritten by dylan format forward method */
   dylan_header_isfwd,
+  no_ispinned,
   dylan_header_pad,
   (size_t)headerSIZE
 };
@@ -152,6 +153,7 @@ static struct mps_fmt_auto_header_s HeaderWeakFormat =
   dylan_header_skip,
   no_fwd,
   no_isfwd,
+  no_ispinned,
   no_pad,
   (size_t)headerSIZE
 };
diff --git a/code/fmtno.c b/code/fmtno.c
index 4854e55..a655d06 100644
--- a/code/fmtno.c
+++ b/code/fmtno.c
@@ -59,6 +59,13 @@ mps_addr_t no_isfwd(mps_addr_t object)
     return 0;
 }
 
+mps_bool_t no_ispinned(mps_addr_t object)
+{
+    unused(object);
+    notreached();
+    return FALSE;
+}
+
 void no_pad(mps_addr_t addr,
             size_t size)
 {
@@ -83,6 +90,7 @@ static struct mps_fmt_A_s no_fmt_A_s =
     no_copy,
     no_fwd,
     no_isfwd,
+    no_ispinned,
     no_pad
 };
 
@@ -94,6 +102,7 @@ static struct mps_fmt_B_s no_fmt_B_s =
     no_copy,
     no_fwd,
     no_isfwd,
+    no_ispinned,
     no_pad,
     no_class
 };
diff --git a/code/fmtno.h b/code/fmtno.h
index c7afc1d..6e8dcd5 100644
--- a/code/fmtno.h
+++ b/code/fmtno.h
@@ -14,6 +14,7 @@ extern mps_addr_t no_skip(mps_addr_t);
 extern void no_copy(mps_addr_t, mps_addr_t);
 extern void no_fwd(mps_addr_t, mps_addr_t);
 extern mps_addr_t no_isfwd(mps_addr_t);
+extern mps_bool_t no_ispinned(mps_addr_t);
 extern void no_pad(mps_addr_t, size_t);
 extern mps_addr_t no_class(mps_addr_t);
 
diff --git a/code/format.c b/code/format.c
index 8d51357..a9515a4 100644
--- a/code/format.c
+++ b/code/format.c
@@ -31,6 +31,7 @@ Bool FormatCheck(Format format)
   CHECKL(FUNCHECK(format->skip));
   CHECKL(FUNCHECK(format->move));
   CHECKL(FUNCHECK(format->isMoved));
+  CHECKL(FUNCHECK(format->isPinned));
   CHECKL(FUNCHECK(format->pad));
   CHECKL(FUNCHECK(format->klass));
 
@@ -70,6 +71,13 @@ static mps_addr_t FormatNoIsMoved(mps_addr_t object)
     return NULL;
 }
 
+static mps_bool_t FormatNoIsPinned(mps_addr_t object)
+{
+    UNUSED(object);
+    NOTREACHED;
+    return FALSE;
+}
+
 static void FormatNoPad(mps_addr_t addr, size_t size)
 {
     UNUSED(addr);
@@ -92,6 +100,7 @@ ARG_DEFINE_KEY(FMT_SCAN, Fun);
 ARG_DEFINE_KEY(FMT_SKIP, Fun);
 ARG_DEFINE_KEY(FMT_FWD, Fun);
 ARG_DEFINE_KEY(FMT_ISFWD, Fun);
+ARG_DEFINE_KEY(FMT_ISPINNED, Fun);
 ARG_DEFINE_KEY(FMT_PAD, Fun);
 ARG_DEFINE_KEY(FMT_HEADER_SIZE, Size);
 ARG_DEFINE_KEY(FMT_CLASS, Fun);
@@ -108,6 +117,7 @@ Res FormatCreate(Format *formatReturn, Arena arena, ArgList args)
   mps_fmt_skip_t fmtSkip = FMT_SKIP_DEFAULT;
   mps_fmt_fwd_t fmtFwd = FMT_FWD_DEFAULT;
   mps_fmt_isfwd_t fmtIsfwd = FMT_ISFWD_DEFAULT;
+  mps_fmt_ispinned_t fmtIspinned = FMT_ISPINNED_DEFAULT;
   mps_fmt_pad_t fmtPad = FMT_PAD_DEFAULT;
   mps_fmt_class_t fmtClass = FMT_CLASS_DEFAULT;
 
@@ -127,6 +137,8 @@ Res FormatCreate(Format *formatReturn, Arena arena, ArgList args)
     fmtFwd = arg.val.fmt_fwd;
   if (ArgPick(&arg, args, MPS_KEY_FMT_ISFWD))
     fmtIsfwd = arg.val.fmt_isfwd;
+  if (ArgPick(&arg, args, MPS_KEY_FMT_ISPINNED))
+    fmtIspinned = arg.val.fmt_ispinned;
   if (ArgPick(&arg, args, MPS_KEY_FMT_PAD))
     fmtPad = arg.val.fmt_pad;
   if (ArgPick(&arg, args, MPS_KEY_FMT_CLASS))
@@ -146,6 +158,7 @@ Res FormatCreate(Format *formatReturn, Arena arena, ArgList args)
   format->skip = fmtSkip;
   format->move = fmtFwd;
   format->isMoved = fmtIsfwd;
+  format->isPinned = fmtIspinned;
   format->pad = fmtPad;
   format->klass = fmtClass;
 
@@ -206,6 +219,7 @@ Res FormatDescribe(Format format, mps_lib_FILE *stream, Count depth)
                "  skip $F\n", (WriteFF)format->skip,
                "  move $F\n", (WriteFF)format->move,
                "  isMoved $F\n", (WriteFF)format->isMoved,
+               "  isPinned $F\n", (WriteFF)format->isPinned,
                "  pad $F\n", (WriteFF)format->pad,
                "  headerSize $W\n", (WriteFW)format->headerSize,
                "} Format $P ($U)\n", (WriteFP)format, (WriteFU)format->serial,
diff --git a/code/mpmst.h b/code/mpmst.h
index f5ba00b..4da18f0 100644
--- a/code/mpmst.h
+++ b/code/mpmst.h
@@ -361,6 +361,7 @@ typedef struct mps_fmt_s {
   mps_fmt_skip_t skip;
   mps_fmt_fwd_t move;
   mps_fmt_isfwd_t isMoved;
+  mps_fmt_ispinned_t isPinned;
   mps_fmt_pad_t pad;
   mps_fmt_class_t klass;        /* pointer indicating class */
   Size headerSize;              /* size of header */
diff --git a/code/mps.h b/code/mps.h
index 700c414..8faadbf 100644
--- a/code/mps.h
+++ b/code/mps.h
@@ -117,6 +117,7 @@ typedef mps_addr_t (*mps_fmt_skip_t)(mps_addr_t);
 typedef void (*mps_fmt_copy_t)(mps_addr_t, mps_addr_t);
 typedef void (*mps_fmt_fwd_t)(mps_addr_t, mps_addr_t);
 typedef mps_addr_t (*mps_fmt_isfwd_t)(mps_addr_t);
+typedef mps_bool_t (*mps_fmt_ispinned_t)(mps_addr_t);
 typedef void (*mps_fmt_pad_t)(mps_addr_t, size_t);
 typedef mps_addr_t (*mps_fmt_class_t)(mps_addr_t);
 
@@ -159,6 +160,7 @@ typedef struct mps_arg_s {
     mps_fmt_skip_t fmt_skip;
     mps_fmt_fwd_t fmt_fwd;
     mps_fmt_isfwd_t fmt_isfwd;
+    mps_fmt_ispinned_t fmt_ispinned;
     mps_fmt_pad_t fmt_pad;
     mps_fmt_class_t fmt_class;
     mps_pool_t pool;
@@ -253,6 +255,9 @@ extern const struct mps_key_s _mps_key_FMT_FWD;
 extern const struct mps_key_s _mps_key_FMT_ISFWD;
 #define MPS_KEY_FMT_ISFWD   (&_mps_key_FMT_ISFWD)
 #define MPS_KEY_FMT_ISFWD_FIELD fmt_isfwd
+extern const struct mps_key_s _mps_key_FMT_ISPINNED;
+#define MPS_KEY_FMT_ISPINNED   (&_mps_key_FMT_ISPINNED)
+#define MPS_KEY_FMT_ISPINNED_FIELD fmt_ispinned
 extern const struct mps_key_s _mps_key_FMT_PAD;
 #define MPS_KEY_FMT_PAD   (&_mps_key_FMT_PAD)
 #define MPS_KEY_FMT_PAD_FIELD fmt_pad
@@ -395,6 +400,7 @@ typedef struct mps_fmt_A_s {
   mps_fmt_copy_t  copy;
   mps_fmt_fwd_t   fwd;
   mps_fmt_isfwd_t isfwd;
+  mps_fmt_ispinned_t ispinned;
   mps_fmt_pad_t   pad;
 } mps_fmt_A_s;
 typedef struct mps_fmt_A_s *mps_fmt_A_t;
@@ -407,6 +413,7 @@ typedef struct mps_fmt_B_s {
   mps_fmt_copy_t  copy;
   mps_fmt_fwd_t   fwd;
   mps_fmt_isfwd_t isfwd;
+  mps_fmt_ispinned_t ispinned;
   mps_fmt_pad_t   pad;
   mps_fmt_class_t mps_class;
 } mps_fmt_B_s;
@@ -420,6 +427,7 @@ typedef struct mps_fmt_auto_header_s {
   mps_fmt_skip_t  skip;
   mps_fmt_fwd_t   fwd;
   mps_fmt_isfwd_t isfwd;
+  mps_fmt_ispinned_t ispinned;
   mps_fmt_pad_t   pad;
   size_t          mps_headerSize;
 } mps_fmt_auto_header_s;
@@ -430,6 +438,7 @@ typedef struct mps_fmt_fixed_s {
   mps_fmt_scan_t  scan;
   mps_fmt_fwd_t   fwd;
   mps_fmt_isfwd_t isfwd;
+  mps_fmt_ispinned_t ispinned;
   mps_fmt_pad_t   pad;
 } mps_fmt_fixed_s;
 
diff --git a/code/mpsi.c b/code/mpsi.c
index eab0deb..0d2ce74 100644
--- a/code/mpsi.c
+++ b/code/mpsi.c
@@ -573,6 +573,7 @@ mps_res_t mps_fmt_create_A(mps_fmt_t *mps_fmt_o,
     MPS_ARGS_ADD(args, MPS_KEY_FMT_SKIP, mps_fmt_A->skip);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_FWD, mps_fmt_A->fwd);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, mps_fmt_A->isfwd);
+    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, mps_fmt_A->ispinned);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, mps_fmt_A->pad);
     res = FormatCreate(&format, arena, args);
   } MPS_ARGS_END(args);
@@ -607,6 +608,7 @@ mps_res_t mps_fmt_create_B(mps_fmt_t *mps_fmt_o,
     MPS_ARGS_ADD(args, MPS_KEY_FMT_SKIP, mps_fmt_B->skip);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_FWD, mps_fmt_B->fwd);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, mps_fmt_B->isfwd);
+    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, mps_fmt_B->ispinned);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, mps_fmt_B->pad);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_CLASS, mps_fmt_B->mps_class);
     res = FormatCreate(&format, arena, args);
@@ -643,6 +645,7 @@ mps_res_t mps_fmt_create_auto_header(mps_fmt_t *mps_fmt_o,
     MPS_ARGS_ADD(args, MPS_KEY_FMT_SKIP, mps_fmt->skip);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_FWD, mps_fmt->fwd);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, mps_fmt->isfwd);
+    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, mps_fmt->ispinned);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, mps_fmt->pad);
     res = FormatCreate(&format, arena, args);
   } MPS_ARGS_END(args);
@@ -676,6 +679,7 @@ mps_res_t mps_fmt_create_fixed(mps_fmt_t *mps_fmt_o,
     MPS_ARGS_ADD(args, MPS_KEY_FMT_SCAN, mps_fmt_fixed->scan);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_FWD, mps_fmt_fixed->fwd);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, mps_fmt_fixed->isfwd);
+    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, mps_fmt_fixed->ispinned);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, mps_fmt_fixed->pad);
     res = FormatCreate(&format, arena, args);
   } MPS_ARGS_END(args);
diff --git a/code/poolamc.c b/code/poolamc.c
index c526c8e..ed14d8a 100644
--- a/code/poolamc.c
+++ b/code/poolamc.c
@@ -1592,6 +1592,29 @@ static Res amcSegFix(Seg seg, ScanState ss, Ref *refIO)
   /* .exposed.seg: Statements tagged ".exposed.seg" below require */
   /* that "seg" (that is: the 'from' seg) has been ShieldExposed. */
   ShieldExpose(arena, seg);
+  /* If the reference is pinned, set up the datastructures for */
+  /* managing a nailed segment.  This involves marking the segment */
+  /* as nailed, and setting up a per-word mark table */
+  if ((*format->isPinned)(ref)) {
+    ShieldCover(arena, seg);
+    /* .nail.new: Check to see whether we need a Nailboard for */
+    /* this seg.  We use "SegNailed(seg) == TraceSetEMPTY" */
+    /* rather than "!amcSegHasNailboard(seg)" because this avoids */
+    /* setting up a new nailboard when the segment was nailed, but */
+    /* had no nailboard.  This must be avoided because otherwise */
+    /* assumptions in amcSegFixEmergency will be wrong (essentially */
+    /* we will lose some pointer fixes because we introduced a */
+    /* nailboard). */
+    if(SegNailed(seg) == TraceSetEMPTY) {
+      res = amcSegCreateNailboard(seg);
+      if(res != ResOK)
+        return res;
+      STATISTIC(++ss->nailCount);
+      SegSetNailed(seg, TraceSetUnion(SegNailed(seg), ss->traces));
+    }
+    amcSegFixInPlace(seg, ss, refIO);
+    return ResOK;
+  }
   newRef = (*format->isMoved)(ref);  /* .exposed.seg */
 
   if(newRef == (Addr)0) {

@mgood7123
Author

mgood7123 commented Oct 3, 2023

Though it can sometimes complete allocating and pinning 1 million objects in ~20 seconds (and in rare cases ~1 second), I usually get around 60 seconds.

The timing seems to vary, mostly between ~10 seconds and ~30 seconds.

@mgood7123
Author

mgood7123 commented Oct 3, 2023

Using a stack-like approach, which constrains pins to be released in LIFO order, seems to be a bit slower:

around ~30 seconds.

void reroot(ManagedObjState * state, managed_obj_t * new_pinned, size_t new_capacity) {
    mps_root_t new_pinned_root = NULL;
    mps_res_t res = mps_root_create_area_tagged(
        &new_pinned_root, state->arena, mps_rank_ambig(),
        (mps_rm_t)0, new_pinned, new_pinned + new_capacity,
        mps_scan_area_tagged, sizeof(mps_word_t) - 1, (mps_word_t)0
    );
    if (res != MPS_RES_OK) managed_obj_error("Couldn't create pinned root");
    /* Both arrays are registered as roots at this point, so it is safe
       to destroy the old root and free the old array. */
    mps_root_destroy(state->pinned_root);
    state->pinned_root = new_pinned_root;
    free(state->pinned);
    state->pinned = new_pinned;
    state->pinned_capacity = new_capacity;
}

void managed_obj_push_pin(ManagedObjState * state, managed_obj_t obj) {
  if (state->pinned_used == state->pinned_capacity) {
    /* Grow: double the capacity. */
    size_t new_capacity = state->pinned_capacity * 2;
    managed_obj_t * new_pinned = calloc(new_capacity, sizeof(managed_obj_t));
    if (new_pinned == NULL) managed_obj_error("Couldn't allocate pinned memory");
    /* Copy the old pinned array into the new one before rerooting. */
    memcpy(new_pinned, state->pinned, sizeof(managed_obj_t) * state->pinned_capacity);
    reroot(state, new_pinned, new_capacity);
  }
  state->pinned[state->pinned_used++] = obj;
}

void managed_obj_pop_pin(ManagedObjState * state) {
  state->pinned[--state->pinned_used] = NULL;
  /* Shrink when half empty, but always keep at least 2 slots. */
  if (state->pinned_capacity > 2 && state->pinned_used == state->pinned_capacity / 2) {
    size_t new_capacity = state->pinned_capacity / 2;
    managed_obj_t * new_pinned = calloc(new_capacity, sizeof(managed_obj_t));
    if (new_pinned == NULL) managed_obj_error("Couldn't allocate pinned memory");
    /* Copy the surviving entries into the new array before rerooting.
       calloc (rather than malloc) keeps any slack words zeroed, so the
       ambiguous root scan never sees uninitialized memory. */
    memcpy(new_pinned, state->pinned, sizeof(managed_obj_t) * new_capacity);
    reroot(state, new_pinned, new_capacity);
  }
}
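For reference, the grow/shrink bookkeeping above can be modelled without MPS at all. This is a standalone sketch with hypothetical names (`PinState`, `obj_t`); in the real code each resize must additionally be bracketed by `reroot()`, which registers the new array as an ambiguous root before the old root is destroyed:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* MPS-free model of the pinned array: it doubles when full and halves
   when only half used, never shrinking below 2 slots. */
typedef void *obj_t;

typedef struct {
  obj_t *pinned;
  size_t pinned_used;
  size_t pinned_capacity;
} PinState;

static void pin_init(PinState *s) {
  s->pinned_used = 0;
  s->pinned_capacity = 2;
  s->pinned = calloc(s->pinned_capacity, sizeof(obj_t));
  assert(s->pinned != NULL);
}

static void push_pin(PinState *s, obj_t obj) {
  if (s->pinned_used == s->pinned_capacity) {
    size_t new_capacity = s->pinned_capacity * 2;
    obj_t *new_pinned = calloc(new_capacity, sizeof(obj_t));
    assert(new_pinned != NULL);
    memcpy(new_pinned, s->pinned, sizeof(obj_t) * s->pinned_capacity);
    free(s->pinned);             /* the real code calls reroot() here */
    s->pinned = new_pinned;
    s->pinned_capacity = new_capacity;
  }
  s->pinned[s->pinned_used++] = obj;
}

static void pop_pin(PinState *s) {
  s->pinned[--s->pinned_used] = NULL;
  if (s->pinned_capacity > 2 && s->pinned_used == s->pinned_capacity / 2) {
    size_t new_capacity = s->pinned_capacity / 2;
    obj_t *new_pinned = calloc(new_capacity, sizeof(obj_t));
    assert(new_pinned != NULL);
    memcpy(new_pinned, s->pinned, sizeof(obj_t) * new_capacity);
    free(s->pinned);             /* the real code calls reroot() here */
    s->pinned = new_pinned;
    s->pinned_capacity = new_capacity;
  }
}
```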

@mgood7123
Author

mgood7123 commented Oct 3, 2023

Both appear to be equally fast, maybe due to CPU caching?

@mgood7123
Author

Though I'm running a COOL build; I'll try a RASH build.

@mgood7123
Author

On RASH I seem to get ~8 seconds for 1 million pinned allocations.

@rptb1
Member

rptb1 commented Oct 23, 2023

Though I'm running a COOL build; I'll try a RASH build.

Please let us know if you are seeing a significant difference between HOT and RASH. The RASH build only really exists as a comparison, to ensure that HOT is fast enough. We strongly discourage the use of RASH, either in development or production, because heap corruption is extremely hard to debug if left around for any amount of time.

@mgood7123
Author

mgood7123 commented Oct 26, 2023

Sorry, I was busy. I ended up integrating object pinning directly into MPS itself, in poolamc.

usage:

    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, managed_obj_isfwd);
    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, managed_obj_ispinned);
    MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, managed_obj_pad);
/* Pinning and unpinning are just flag stores on the object itself. */
static inline void managed_obj_pin(ManagedObjState * state, managed_obj_t obj) {
  obj->type.pinned = TRUE;
  state->pinned_used++;
}

static inline void managed_obj_unpin(ManagedObjState * state, managed_obj_t obj) {
  obj->type.pinned = FALSE;
  state->pinned_used--;
}

/* MPS_KEY_FMT_ISPINNED callback: report whether this object is pinned. */
static mps_bool_t managed_obj_ispinned(mps_addr_t addr)
{
  managed_obj_t obj = (managed_obj_t)addr;
  return obj->type.pinned;
}
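The pin-bit idea can be illustrated with a toy, MPS-free model (hypothetical types; a stand-in for what AMC's nailing does, not actual MPS behaviour). The collector consults each object's flag, the way the patched amcSegFix consults format->isPinned, and only unpinned objects get forwarded:

```c
#include <assert.h>
#include <stddef.h>

/* Toy object: carries its own "pinned" flag, like obj->type.pinned above. */
typedef struct {
  int pinned;
  int value;
} Obj;

/* Pinning is a single O(1) flag store; no auxiliary root array needed. */
static void obj_pin(Obj *o)   { o->pinned = 1; }
static void obj_unpin(Obj *o) { o->pinned = 0; }

/* Mock "copying pass": unpinned objects are copied to to-space and
   their slots updated to the new address; pinned objects keep their
   address, as a nailed AMC object would. */
static void compact(Obj **slots, size_t n, Obj *to_space, size_t *to_used) {
  for (size_t i = 0; i < n; i++) {
    if (!slots[i]->pinned) {
      to_space[*to_used] = *slots[i];      /* move the object */
      slots[i] = &to_space[(*to_used)++];  /* forward the reference */
    }
    /* pinned: address left unchanged */
  }
}
```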

@mgood7123
Author

mgood7123 commented Oct 26, 2023

from commit 554513b

HEAD is now at 554513b16 Merging branch branch/2023-04-13/transforms for GitHub pull request #214 <https://github.com/Ravenbrook/mps/pull/214>.

diff --git a/mps_raven/code/config.h b/mps_mgood/code/config.h
index 02d335e..5561bec 100644
--- a/mps_raven/code/config.h
+++ b/mps_mgood/code/config.h
@@ -353,6 +353,7 @@
 #define FMT_SKIP_DEFAULT (&FormatNoSkip)
 #define FMT_FWD_DEFAULT (&FormatNoMove)
 #define FMT_ISFWD_DEFAULT (&FormatNoIsMoved)
+#define FMT_ISPINNED_DEFAULT (&FormatNoIsPinned)
 #define FMT_PAD_DEFAULT (&FormatNoPad)
 #define FMT_CLASS_DEFAULT (&FormatDefaultClass)
 
diff --git a/mps_raven/code/fmtdy.c b/mps_mgood/code/fmtdy.c
index 6cfbca6..d43763a 100644
--- a/mps_raven/code/fmtdy.c
+++ b/mps_mgood/code/fmtdy.c
@@ -771,6 +771,7 @@ static struct mps_fmt_A_s dylan_fmt_A_s =
   dylan_copy,
   dylan_fwd,
   dylan_isfwd,
+  no_ispinned,
   dylan_pad
 };
 
@@ -782,6 +783,7 @@ static struct mps_fmt_B_s dylan_fmt_B_s =
   dylan_copy,
   dylan_fwd,
   dylan_isfwd,
+  no_ispinned,
   dylan_pad,
   dylan_class
 };
@@ -816,6 +818,7 @@ static struct mps_fmt_A_s dylan_fmt_A_weak_s =
   no_copy,
   no_fwd,
   no_isfwd,
+  no_ispinned,
   no_pad
 };
 
@@ -827,6 +830,7 @@ static struct mps_fmt_B_s dylan_fmt_B_weak_s =
   no_copy,
   no_fwd,
   no_isfwd,
+  no_ispinned,
   no_pad,
   dylan_class
 };
diff --git a/mps_raven/code/fmthe.c b/mps_mgood/code/fmthe.c
index 76f7239..0dfd422 100644
--- a/mps_raven/code/fmthe.c
+++ b/mps_mgood/code/fmthe.c
@@ -138,6 +138,7 @@ static struct mps_fmt_auto_header_s HeaderFormat =
   dylan_header_skip,
   NULL, /* later overwritten by dylan format forward method */
   dylan_header_isfwd,
+  no_ispinned,
   dylan_header_pad,
   (size_t)headerSIZE
 };
@@ -152,6 +153,7 @@ static struct mps_fmt_auto_header_s HeaderWeakFormat =
   dylan_header_skip,
   no_fwd,
   no_isfwd,
+  no_ispinned,
   no_pad,
   (size_t)headerSIZE
 };
diff --git a/mps_raven/code/fmtno.c b/mps_mgood/code/fmtno.c
index 4854e55..a655d06 100644
--- a/mps_raven/code/fmtno.c
+++ b/mps_mgood/code/fmtno.c
@@ -59,6 +59,13 @@ mps_addr_t no_isfwd(mps_addr_t object)
     return 0;
 }
 
+mps_bool_t no_ispinned(mps_addr_t object)
+{
+    unused(object);
+    notreached();
+    return FALSE;
+}
+
 void no_pad(mps_addr_t addr,
             size_t size)
 {
@@ -83,6 +90,7 @@ static struct mps_fmt_A_s no_fmt_A_s =
     no_copy,
     no_fwd,
     no_isfwd,
+    no_ispinned,
     no_pad
 };
 
@@ -94,6 +102,7 @@ static struct mps_fmt_B_s no_fmt_B_s =
     no_copy,
     no_fwd,
     no_isfwd,
+    no_ispinned,
     no_pad,
     no_class
 };
diff --git a/mps_raven/code/fmtno.h b/mps_mgood/code/fmtno.h
index c7afc1d..6e8dcd5 100644
--- a/mps_raven/code/fmtno.h
+++ b/mps_mgood/code/fmtno.h
@@ -14,6 +14,7 @@ extern mps_addr_t no_skip(mps_addr_t);
 extern void no_copy(mps_addr_t, mps_addr_t);
 extern void no_fwd(mps_addr_t, mps_addr_t);
 extern mps_addr_t no_isfwd(mps_addr_t);
+extern mps_bool_t no_ispinned(mps_addr_t);
 extern void no_pad(mps_addr_t, size_t);
 extern mps_addr_t no_class(mps_addr_t);
 
diff --git a/mps_raven/code/format.c b/mps_mgood/code/format.c
index 8d51357..a9515a4 100644
--- a/mps_raven/code/format.c
+++ b/mps_mgood/code/format.c
@@ -31,6 +31,7 @@ Bool FormatCheck(Format format)
   CHECKL(FUNCHECK(format->skip));
   CHECKL(FUNCHECK(format->move));
   CHECKL(FUNCHECK(format->isMoved));
+  CHECKL(FUNCHECK(format->isPinned));
   CHECKL(FUNCHECK(format->pad));
   CHECKL(FUNCHECK(format->klass));
 
@@ -70,6 +71,13 @@ static mps_addr_t FormatNoIsMoved(mps_addr_t object)
     return NULL;
 }
 
+static mps_bool_t FormatNoIsPinned(mps_addr_t object)
+{
+    UNUSED(object);
+    NOTREACHED;
+    return FALSE;
+}
+
 static void FormatNoPad(mps_addr_t addr, size_t size)
 {
     UNUSED(addr);
@@ -92,6 +100,7 @@ ARG_DEFINE_KEY(FMT_SCAN, Fun);
 ARG_DEFINE_KEY(FMT_SKIP, Fun);
 ARG_DEFINE_KEY(FMT_FWD, Fun);
 ARG_DEFINE_KEY(FMT_ISFWD, Fun);
+ARG_DEFINE_KEY(FMT_ISPINNED, Fun);
 ARG_DEFINE_KEY(FMT_PAD, Fun);
 ARG_DEFINE_KEY(FMT_HEADER_SIZE, Size);
 ARG_DEFINE_KEY(FMT_CLASS, Fun);
@@ -108,6 +117,7 @@ Res FormatCreate(Format *formatReturn, Arena arena, ArgList args)
   mps_fmt_skip_t fmtSkip = FMT_SKIP_DEFAULT;
   mps_fmt_fwd_t fmtFwd = FMT_FWD_DEFAULT;
   mps_fmt_isfwd_t fmtIsfwd = FMT_ISFWD_DEFAULT;
+  mps_fmt_ispinned_t fmtIspinned = FMT_ISPINNED_DEFAULT;
   mps_fmt_pad_t fmtPad = FMT_PAD_DEFAULT;
   mps_fmt_class_t fmtClass = FMT_CLASS_DEFAULT;
 
@@ -127,6 +137,8 @@ Res FormatCreate(Format *formatReturn, Arena arena, ArgList args)
     fmtFwd = arg.val.fmt_fwd;
   if (ArgPick(&arg, args, MPS_KEY_FMT_ISFWD))
     fmtIsfwd = arg.val.fmt_isfwd;
+  if (ArgPick(&arg, args, MPS_KEY_FMT_ISPINNED))
+    fmtIspinned = arg.val.fmt_ispinned;
   if (ArgPick(&arg, args, MPS_KEY_FMT_PAD))
     fmtPad = arg.val.fmt_pad;
   if (ArgPick(&arg, args, MPS_KEY_FMT_CLASS))
@@ -146,6 +158,7 @@ Res FormatCreate(Format *formatReturn, Arena arena, ArgList args)
   format->skip = fmtSkip;
   format->move = fmtFwd;
   format->isMoved = fmtIsfwd;
+  format->isPinned = fmtIspinned;
   format->pad = fmtPad;
   format->klass = fmtClass;
 
@@ -206,6 +219,7 @@ Res FormatDescribe(Format format, mps_lib_FILE *stream, Count depth)
                "  skip $F\n", (WriteFF)format->skip,
                "  move $F\n", (WriteFF)format->move,
                "  isMoved $F\n", (WriteFF)format->isMoved,
+               "  isPinned $F\n", (WriteFF)format->isPinned,
                "  pad $F\n", (WriteFF)format->pad,
                "  headerSize $W\n", (WriteFW)format->headerSize,
                "} Format $P ($U)\n", (WriteFP)format, (WriteFU)format->serial,
diff --git a/mps_raven/code/mpmst.h b/mps_mgood/code/mpmst.h
index f5ba00b..4da18f0 100644
--- a/mps_raven/code/mpmst.h
+++ b/mps_mgood/code/mpmst.h
@@ -361,6 +361,7 @@ typedef struct mps_fmt_s {
   mps_fmt_skip_t skip;
   mps_fmt_fwd_t move;
   mps_fmt_isfwd_t isMoved;
+  mps_fmt_ispinned_t isPinned;
   mps_fmt_pad_t pad;
   mps_fmt_class_t klass;        /* pointer indicating class */
   Size headerSize;              /* size of header */
diff --git a/mps_raven/code/mps.h b/mps_mgood/code/mps.h
index 700c414..8faadbf 100644
--- a/mps_raven/code/mps.h
+++ b/mps_mgood/code/mps.h
@@ -117,6 +117,7 @@ typedef mps_addr_t (*mps_fmt_skip_t)(mps_addr_t);
 typedef void (*mps_fmt_copy_t)(mps_addr_t, mps_addr_t);
 typedef void (*mps_fmt_fwd_t)(mps_addr_t, mps_addr_t);
 typedef mps_addr_t (*mps_fmt_isfwd_t)(mps_addr_t);
+typedef mps_bool_t (*mps_fmt_ispinned_t)(mps_addr_t);
 typedef void (*mps_fmt_pad_t)(mps_addr_t, size_t);
 typedef mps_addr_t (*mps_fmt_class_t)(mps_addr_t);
 
@@ -159,6 +160,7 @@ typedef struct mps_arg_s {
     mps_fmt_skip_t fmt_skip;
     mps_fmt_fwd_t fmt_fwd;
     mps_fmt_isfwd_t fmt_isfwd;
+    mps_fmt_ispinned_t fmt_ispinned;
     mps_fmt_pad_t fmt_pad;
     mps_fmt_class_t fmt_class;
     mps_pool_t pool;
@@ -253,6 +255,9 @@ extern const struct mps_key_s _mps_key_FMT_FWD;
 extern const struct mps_key_s _mps_key_FMT_ISFWD;
 #define MPS_KEY_FMT_ISFWD   (&_mps_key_FMT_ISFWD)
 #define MPS_KEY_FMT_ISFWD_FIELD fmt_isfwd
+extern const struct mps_key_s _mps_key_FMT_ISPINNED;
+#define MPS_KEY_FMT_ISPINNED   (&_mps_key_FMT_ISPINNED)
+#define MPS_KEY_FMT_ISPINNED_FIELD fmt_ispinned
 extern const struct mps_key_s _mps_key_FMT_PAD;
 #define MPS_KEY_FMT_PAD   (&_mps_key_FMT_PAD)
 #define MPS_KEY_FMT_PAD_FIELD fmt_pad
@@ -395,6 +400,7 @@ typedef struct mps_fmt_A_s {
   mps_fmt_copy_t  copy;
   mps_fmt_fwd_t   fwd;
   mps_fmt_isfwd_t isfwd;
+  mps_fmt_ispinned_t ispinned;
   mps_fmt_pad_t   pad;
 } mps_fmt_A_s;
 typedef struct mps_fmt_A_s *mps_fmt_A_t;
@@ -407,6 +413,7 @@ typedef struct mps_fmt_B_s {
   mps_fmt_copy_t  copy;
   mps_fmt_fwd_t   fwd;
   mps_fmt_isfwd_t isfwd;
+  mps_fmt_ispinned_t ispinned;
   mps_fmt_pad_t   pad;
   mps_fmt_class_t mps_class;
 } mps_fmt_B_s;
@@ -420,6 +427,7 @@ typedef struct mps_fmt_auto_header_s {
   mps_fmt_skip_t  skip;
   mps_fmt_fwd_t   fwd;
   mps_fmt_isfwd_t isfwd;
+  mps_fmt_ispinned_t ispinned;
   mps_fmt_pad_t   pad;
   size_t          mps_headerSize;
 } mps_fmt_auto_header_s;
@@ -430,6 +438,7 @@ typedef struct mps_fmt_fixed_s {
   mps_fmt_scan_t  scan;
   mps_fmt_fwd_t   fwd;
   mps_fmt_isfwd_t isfwd;
+  mps_fmt_ispinned_t ispinned;
   mps_fmt_pad_t   pad;
 } mps_fmt_fixed_s;
 
diff --git a/mps_raven/code/mpsi.c b/mps_mgood/code/mpsi.c
index eab0deb..0d2ce74 100644
--- a/mps_raven/code/mpsi.c
+++ b/mps_mgood/code/mpsi.c
@@ -573,6 +573,7 @@ mps_res_t mps_fmt_create_A(mps_fmt_t *mps_fmt_o,
     MPS_ARGS_ADD(args, MPS_KEY_FMT_SKIP, mps_fmt_A->skip);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_FWD, mps_fmt_A->fwd);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, mps_fmt_A->isfwd);
+    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, mps_fmt_A->ispinned);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, mps_fmt_A->pad);
     res = FormatCreate(&format, arena, args);
   } MPS_ARGS_END(args);
@@ -607,6 +608,7 @@ mps_res_t mps_fmt_create_B(mps_fmt_t *mps_fmt_o,
     MPS_ARGS_ADD(args, MPS_KEY_FMT_SKIP, mps_fmt_B->skip);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_FWD, mps_fmt_B->fwd);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, mps_fmt_B->isfwd);
+    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, mps_fmt_B->ispinned);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, mps_fmt_B->pad);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_CLASS, mps_fmt_B->mps_class);
     res = FormatCreate(&format, arena, args);
@@ -643,6 +645,7 @@ mps_res_t mps_fmt_create_auto_header(mps_fmt_t *mps_fmt_o,
     MPS_ARGS_ADD(args, MPS_KEY_FMT_SKIP, mps_fmt->skip);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_FWD, mps_fmt->fwd);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, mps_fmt->isfwd);
+    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, mps_fmt->ispinned);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, mps_fmt->pad);
     res = FormatCreate(&format, arena, args);
   } MPS_ARGS_END(args);
@@ -676,6 +679,7 @@ mps_res_t mps_fmt_create_fixed(mps_fmt_t *mps_fmt_o,
     MPS_ARGS_ADD(args, MPS_KEY_FMT_SCAN, mps_fmt_fixed->scan);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_FWD, mps_fmt_fixed->fwd);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, mps_fmt_fixed->isfwd);
+    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, mps_fmt_fixed->ispinned);
     MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, mps_fmt_fixed->pad);
     res = FormatCreate(&format, arena, args);
   } MPS_ARGS_END(args);
diff --git a/mps_raven/code/poolamc.c b/mps_mgood/code/poolamc.c
index c526c8e..ed14d8a 100644
--- a/mps_raven/code/poolamc.c
+++ b/mps_mgood/code/poolamc.c
@@ -1592,6 +1592,29 @@ static Res amcSegFix(Seg seg, ScanState ss, Ref *refIO)
   /* .exposed.seg: Statements tagged ".exposed.seg" below require */
   /* that "seg" (that is: the 'from' seg) has been ShieldExposed. */
   ShieldExpose(arena, seg);
+  /* If the reference is pinned, set up the datastructures for */
+  /* managing a nailed segment.  This involves marking the segment */
+  /* as nailed, and setting up a per-word mark table */
+  if ((*format->isPinned)(ref)) {
+    ShieldCover(arena, seg);
+    /* .nail.new: Check to see whether we need a Nailboard for */
+    /* this seg.  We use "SegNailed(seg) == TraceSetEMPTY" */
+    /* rather than "!amcSegHasNailboard(seg)" because this avoids */
+    /* setting up a new nailboard when the segment was nailed, but */
+    /* had no nailboard.  This must be avoided because otherwise */
+    /* assumptions in amcSegFixEmergency will be wrong (essentially */
+    /* we will lose some pointer fixes because we introduced a */
+    /* nailboard). */
+    if(SegNailed(seg) == TraceSetEMPTY) {
+      res = amcSegCreateNailboard(seg);
+      if(res != ResOK)
+        return res;
+      STATISTIC(++ss->nailCount);
+      SegSetNailed(seg, TraceSetUnion(SegNailed(seg), ss->traces));
+    }
+    amcSegFixInPlace(seg, ss, refIO);
+    return ResOK;
+  }
   newRef = (*format->isMoved)(ref);  /* .exposed.seg */
 
   if(newRef == (Addr)0) {

@rptb1
Member

rptb1 commented Nov 9, 2023

The idea of pinning objects based on their contents is very interesting! Thanks for telling us about this.

Can you explain what the use case for this is? Are you setting a "pin bit" in an object before it's passed through an FFI to an unregistered thread, or in shared memory to another process?

@mgood7123
Author

mgood7123 commented Nov 9, 2023

Are you setting a "pin bit" in an object before it's passed through an FFI to an unregistered thread, or in shared memory to another process?

Yes.

@mgood7123
Author

Any update?

@mgood7123
Author

Sorry, I was busy. I ended up integrating object pinning directly into MPS itself, in poolamc.

usage:

    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISFWD, managed_obj_isfwd);
    MPS_ARGS_ADD(args, MPS_KEY_FMT_ISPINNED, managed_obj_ispinned);
    MPS_ARGS_ADD(args, MPS_KEY_FMT_PAD, managed_obj_pad);
/* Pinning and unpinning are just flag stores on the object itself. */
static inline void managed_obj_pin(ManagedObjState * state, managed_obj_t obj) {
  obj->type.pinned = TRUE;
  state->pinned_used++;
}

static inline void managed_obj_unpin(ManagedObjState * state, managed_obj_t obj) {
  obj->type.pinned = FALSE;
  state->pinned_used--;
}

/* MPS_KEY_FMT_ISPINNED callback: report whether this object is pinned. */
static mps_bool_t managed_obj_ispinned(mps_addr_t addr)
{
  managed_obj_t obj = (managed_obj_t)addr;
  return obj->type.pinned;
}

By the way, direct integration has almost zero performance impact for actually pinning and unpinning objects: we simply set a boolean to TRUE or FALSE on each object we wish to pin or unpin, which is a lot faster than managing a dedicated array, memory region, or pool of pinned objects.
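A rough illustration of that cost difference (not a benchmark, and hypothetical names throughout): unpinning via an unordered side array must first find the entry, which is O(n) in the number of pinned objects, while the flag approach is a single store regardless of how many objects are pinned. The counters below just count array probes:

```c
#include <stddef.h>

/* Flag-based object: the pin state lives in the object itself. */
typedef struct { int pinned; } FObj;

/* Array-based unpin: linear search for the entry, then swap-remove.
   Returns the number of probes needed to locate the object. */
static size_t array_unpin(void **arr, size_t used, void *obj) {
  size_t steps = 0;
  for (size_t i = 0; i < used; i++) {
    steps++;
    if (arr[i] == obj) {
      arr[i] = arr[used - 1];  /* swap-remove with the last entry */
      break;
    }
  }
  return steps;
}

/* Flag-based unpin: always exactly one store. */
static size_t flag_unpin(FObj *o) {
  o->pinned = 0;
  return 1;
}
```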

@mgood7123
Author

Any thoughts on eventually adding this?
