The docs say that to allow a role to upload files, i would need to grant insert permissions on the `storage.files` table and select the `id` column, but i can't get it working when i try to limit bucket uploads.
- SH
The docs say that to allow a role to upload files, i would need to grant insert permissions on the `storage.files` table and select the `id` column, but i can't get it working when i try to limit bucket uploads.
- SH
i have a bucket `my_bucket` in the storage.buckets table. i tried to add insert permissions for the `user` role set to custom check: `{"bucket_id":{"_eq":"my_bucket"}}`
- SH
it gives me a 500 with `"[problem processing request: problem initializing file metadata: Message: check constraint of an insert/update permission has failed,`
- SH
i tried adding permissions for all columns, not just the `id` column, but that doesn't help
- SH
i am sending the bucket ID in the call to nhost.storage.upload as well
nhost.storage.upload({
  file,
  name: encodeURIComponent(file.name),
  bucketId: 'my_bucket',
})
- SH
what else am i supposed to do to make this work? i can only get it to work if i completely remove any permission checks on the user role, which i obviously don't want. (also, is it expected that permission errors give 500 responses? i would have expected 4XX codes)
- SH
in fact it seems as though none of the permissions work as expected. e.g. i can set permissions like this and it allows the file to be uploaded even though the file clearly does not match the permission check
- SH
can someone let me know if i'm misunderstanding how to use the storage.files permissions? if i'm not misunderstanding, then my experimentation is showing that the permissions behavior is quite buggy/broken right now, at least for local development. here are my findings:
- SH
TEST: NO permissions for `user` role insert
RESULT: 403 Error
EXPECTED: Yes
nhost-backend-final-result-gifted_lehmann-storage-1 | time="2022-11-10T23:32:17Z" level=error msg="call completed with some errors" client_ip=xxx errors="[problem processing request: problem initializing file metadata: Message: field 'insertFiles' not found in type: 'mutation_root', Locations: []]" latency_time=22.620875ms method=POST status_code=403 url=/v1/storage/files

TEST: Add insert permissions for `user` role and don't include any columns
RESULT: 403 Error
EXPECTED: Yes
nhost-backend-final-result-gifted_lehmann-storage-1 | time="2022-11-10T23:36:11Z" level=error msg="call completed with some errors" client_ip=xxx errors="[problem processing request: problem initializing file metadata: Message: field 'id' not found in type: 'files_insert_input', Locations: []]" latency_time=37.885333ms method=POST status_code=403 url=/v1/storage/files

TEST: Add insert permissions for `user` and allow the `id` column
RESULT: Success
EXPECTED: Yes

TEST: Add a mime type check that should fail (custom check: `{"mime_type":{"_eq":"NOT_A_REAL_MIME_TYPE"}}`)
RESULT: Success
EXPECTED: No!

TEST: Add a size check that should fail (custom check: `{"size":{"_eq":-1093}}`)
RESULT: Success
EXPECTED: No!

TEST: Add a bucket ID check that should succeed (custom check: `{"bucket_id":{"_eq":"my_bucket"}}`)
RESULT: 500 Error
EXPECTED: No!
nhost-backend-final-result-gifted_lehmann-storage-1 | time="2022-11-10T23:39:31Z" level=error msg="call completed with some errors" client_ip=xxx errors="[problem processing request: problem initializing file metadata: Message: check constraint of an insert/update permission has failed, Locations: []]" latency_time=68.364167ms method=POST status_code=500 url=/v1/storage/files

TEST: Add a general bucket ID check that only checks null (custom check: `{"bucket_id":{"_is_null":false}}`)
RESULT: Success
EXPECTED: Yes
- DA
permissions are not determined by hasura-storage, permissions are determined by hasura itself. If the user making the request can't perform the needed action on the files table then hasura will reject the request
- SH
but i've added the permissions in the files table
- SH
i'm just showing the result in these cases with the permission checks
- SH
like for example if i set a mime_type check in the user permissions for the storage.files table, shouldn't it block the upload?
- SH
or a bucket_id check?
- DA
this is the query being done by hasura-storage to check permissions when uploading a new file:
- DA
mutation InitializeFile($id: uuid) {
  insertFiles(objects: { id: $id }) {
    affected_rows
  }
}
- DA
my suggestion would be to try it directly as a user you are having problems with
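For reference, a sketch of what trying it directly could look like from code (the local endpoint URL, env var, and token handling are assumptions, not from the thread) -- the same mutation hasura-storage runs, sent as the `user` role:

```ts
// Sketch: reproduce hasura-storage's permission check as the `user` role.
// Assumes Node 18+ (global fetch) and a valid access token for a user session.
import { randomUUID } from "node:crypto";

const res = await fetch("http://localhost:1337/v1/graphql", { // local Nhost CLI endpoint (assumed)
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.USER_ACCESS_TOKEN}`, // placeholder token
  },
  body: JSON.stringify({
    query: `mutation InitializeFile($id: uuid) {
      insertFiles(objects: { id: $id }) { affected_rows }
    }`,
    variables: { id: randomUUID() },
  }),
});
// a "check constraint of an insert/update permission has failed" response
// here reproduces the upload failure seen in the storage logs
console.log(JSON.stringify(await res.json(), null, 2));
```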
- SH
i am trying that
- SH
i'm uploading with the browser to test this
- SH
the browser just doesn't get the full storage error so i was also pasting the logs from the nhost storage container
- DA
yes, but if you try running `mutation InitializeFile($id: uuid) {insertFiles(objects: {id: $id}) {affected_rows}}` in the hasura console directly it is going to break
- SH
here's an example setup
- SH
my expectation is that it should not allow other mime types, but it does
- SH
and similarly, when i try to limit the bucket, it simply doesn't allow the upload
- SH
but my understanding is that these permissions should be allowing or blocking the user from uploading
- SH
it does seem to at least be pulling some of the permissions because it blocks the upload with that 403 if i remove the user insert permissions entirely (as expected)
- SH
but it doesn't actually seem to apply any of the custom checks properly
- SH
the most important one is i tried to set bucket_id _eq to my bucket ID that i set up in storage.buckets, but it won't allow the upload even though i'm passing the bucket ID to the upload call on the client
- DA
I know, I was just suggesting to try that in the console because that would reproduce the behaviour that hasura-storage performs
- DA
but I understand your problem as it is not a trivial one
- DA
we should probably discuss this in a github issue with more people to see what the best solution is
- SH
ok, so just to understand, the way it works now is expected behavior?
- SH
the only thing it does is check for ID insert permission?
- SH
i had thought the point of the metadata table was to manage permissions (and i've seen people in the discord recommending using the bucket check as a way to determine who can upload to what bucket)
- DA
yeah, exactly, which isn't ideal for use cases like yours (which I believe we should support) so I think we need to discuss a solution
- SH
hmmm yeah that's super confusing
- SH
because all the fields are available as permissions
- SH
i had thought that's the reason for the metadata so you can limit uploads directly with the permissions
- DA
I agree, if you don't mind opening an issue in hasura-storage we can discuss a solution there (otherwise I will open it myself)
- DA
it is not a hard fix but we decided to do it this way because otherwise we need to fully process a file before even knowing if we can upload it (i.e. compute etag, file size, mimeType, etc)
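To illustrate the tradeoff (a sketch, not hasura-storage's actual code): a value like the etag is only known after the entire file has been read, so a permission check on it can't run before the upload is processed:

```ts
// Sketch: an S3-style etag for a single-part upload is the MD5 of the full
// content, so computing it requires streaming the whole file first.
import { createHash } from "node:crypto";
import { createReadStream } from "node:fs";

async function computeEtag(path: string): Promise<string> {
  const hash = createHash("md5");
  for await (const chunk of createReadStream(path)) {
    hash.update(chunk as Buffer);
  }
  return `"${hash.digest("hex")}"`; // S3 etags are quoted hex digests
}
```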
- DA
so it was a case of "functionality" vs "trying to make it more performant in the case of people uploading where they shouldn't"
- SH
but then how do i use the buckets?
- SH
that means that i can't stop people from uploading to specific buckets
- SH
it's all or nothing basically
- DA
as a workaround while we decide on a fix you can try to limit the update permissions for the admin user
- SH
what do you mean?
- SH
sorry not sure i understand the workaround
- SH
i have several buckets and many of them are meant to be read-only for users, but i have 2 buckets that i want users to be able to upload to. i don't want them to be able to upload to any other bucket
- SH
so i was trying to just write insert permissions like `bucketid: {in: [x, y, z]}` which would have been perfect
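(In Hasura's boolean-expression syntax that would be a custom check along the lines of `{"bucket_id":{"_in":["x","y","z"]}}`, keeping SH's placeholder bucket names; note the underscore prefix on the `_in` operator.)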
- DA
uploading a file right now is three steps:
1. insert metadata to check user (partial) permissions:
`mutation InitializeFile($id: uuid) {insertFiles(objects: {id: $id}) {affected_rows}}`
2. upload the file to s3
3. update the row inserted previously as admin:
`updateFile(pk_columns: {id: $id}, _set: {bucketId: $bucketId, etag: $etag, isUploaded: $isUploaded, mimeType: $mimeType, name: $name, size: $size})`
- DA
so the workaround would be to limit (3) by just setting the permissions for the admin role in the update case
- SH
i don't think you can set permissions for the admin role though?
- DA
if you are not in a hurry I can prioritize this issue so between discussing the best approach and implementing maybe we can have a fix mid-late next week
- SH
it's just fully granted for everything
- SH
at least in hasura console
- SH
unless there is somewhere else i can set the permissions
- DA
you are probably right and it's probably not possible
- DA
if you can I'd suggest ignoring this issue for a day or two while we discuss it on a github issue. We may have it fixed by the end of next week
- DA
(afk for about 45min)
- SH
heh we are indeed in a hurry bc we are trying to do our migration to nhost and are in the final stages of trying to test and convert. but that would be really good if it can be fixed soon, since i think probably other users would also expect the storage permissions to work this way
- SH
even if the mime type etc is harder to implement, the bucket_id is known at the time of upload (though it would be best if all of those fields worked; if not, the documentation could be clearer about it)
- SH
one thing that i think is interesting though is that bucketid is being checked in some way, it's just not actually matching my bucketid from my storage.buckets table
- SH
because if i do a custom check `{"bucket_id":{"_is_null":false}}` and pass my bucket_id in the browser upload, the upload succeeds but if i do `{"bucket_id":{"_is_null":true}}` the upload fails
- SH
so something is running that custom check
- DA
that's because the bucketid is never empty, if you look at the table definition bucketid defaults to `default` if empty
- DA
FYI, the fix is a bit simpler than I initially thought and I just had a discussion with Johan about it and we agreed it is the right thing to do so we will be fixing this soon
- SH
so it's ignoring the bucket_id i'm passing then and only updating it in the admin call
- SH
?
- SH
sorry, i meant in the permission check
- SH
(just tested it and if i set bucket_id _eq "default" it passes so i understand what you're saying)
- SH
because the initial insert omits everything but the ID
- SH
it makes the files permissions much more powerful
- SH
(and less confusing)
- DA
agreed, that's why community feedback is important, after all, features are for you, not for us :P
- DA
just a quick update, hasura-storage 0.3.0 has been released. This version should allow you to do the permission checks you are trying to do, otherwise, let us know
- DA
don't forget to update the CLI to test locally
- SH
great, updated the cli and am trying it now!
- SH
bucket_id check seems to be working as expected now!
- SH
(at least for insert. the docs need to be updated though because now permission to insert needs to be granted to additional columns, not just `id` -- i think maybe name, bucketid, size, mimetype? i assume etag and uploadedbyuserid should still only be inserted by admin and i know uploadedbyuserid is marked as deprecated elsewhere)
- EL
That's correct
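(For anyone skimming: the working setup after hasura-storage 0.3.0, as described in the two messages above, is an insert permission on storage.files for the `user` role that allows roughly the `id`, `name`, `bucket_id`, `mime_type`, and `size` columns, plus a custom check such as `{"bucket_id":{"_eq":"my_bucket"}}`; the exact column list may vary between versions.)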
- SH
is there a recommended way to send header variables to the storage API without running into cors errors?
- SH
i want to set files permissions with the custom session variables (sent via header by the public role) but it seems like i then need to send that header to the public files URL and it creates CORS errors in the browser
- SH
hasura allows session variables to be sent in the header (currently hasura won't allow header session variables when using JWT auth, but they are allowed with the unauthenticated role). i'm guessing CORS with nhost won't work when trying to GET the storage files route unless the list of allowed headers is configurable? https://github.com/nhost/hasura-storage/blob/main/controller/controller.go#LL131
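A sketch of the request pattern being described (the URL shape follows the `/v1/storage/files` path from the logs above; the header name and values are hypothetical) -- any non-safelisted header turns the GET into a preflighted CORS request, which the browser blocks unless the storage service allows that header:

```ts
// Placeholders for illustration only.
const backendUrl = "https://<subdomain>.nhost.run"; // assumed backend URL shape
const fileId = "00000000-0000-0000-0000-000000000000";
const linkId = "some-link-id";

// The custom header triggers a CORS preflight (OPTIONS) that the storage
// service must explicitly allow, hence the browser errors described above.
const res = await fetch(`${backendUrl}/v1/storage/files/${fileId}`, {
  headers: { "x-hasura-link-id": linkId }, // hypothetical session-variable header
});
```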
- EL
Sounds like a possible bug on our side.
- MA
Having both CORS-allowed domains and allowed headers be configurable would be pretty fantastic (Hello nhost! Just de-lurking for a second - thank you for all your work!)
- SH
the storage client needs to accept custom headers also for `getPresignedUrl` calls for this reason (though it still won't work fully until nhost allows cors customization more generally for the backend)
- DA
yeah, exactly, you'd need to be able to configure those two parameters. However, keep in mind that headers aren't really authenticated, signed or anything. If you use headers to provide extra permissions, anyone with a valid user could spoof those easily
- DA
a better approach would be to use custom claims in the JWT as the payload is signed and can't be spoofed without the jwt secret
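(For context on DA's suggestion: Hasura turns custom JWT claims into session variables that permission rules can reference, e.g. a claim surfaced as `x-hasura-link-id` could back a check like `{"link_id":{"_eq":"X-Hasura-Link-Id"}}` -- but as SH explains below, that requires an authenticated user with a JWT.)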
- SH
i know the header is not secure, but the only point of the header in this case is to pass a link ID
- SH
so the ID itself is the authentication.
- SH
it's not possible for me to use custom claims because these are unauthenticated users
- SH
they don't have JWTs and the link ID is a dynamic variable -- the only possibility is session headers (which Hasura does allow to be sent when you don't have a JWT, with the understanding though that it's not secure for most types of permissions)
- DA
in order to support this we'd need to do some changes in our backend infrastructure so I'd suggest creating an issue in the nhost/nhost repo to discuss allowing setting your own headers/domains for CORS validation
- EL
Sounds like this is something we should support, so please @sheena create an issue and we'll prioritize it.
- DA
I have an update regarding custom headers. While the right solution is to allow you to configure which custom headers and origin domains you want to allow, that requires a bit more development effort. In the meantime, I have a proposed patch to dynamically allow any `X-Hasura-*` header that I'd like you to test before merging the PR: https://github.com/nhost/hasura-storage/pull/135
If this is suitable for you let's sync and find a time so I can update your app so you can test
- EL
just pinging @sheena so she doesn't miss this message
- SH
great, thank you @david.barroso how would be best to test? we are trying to focus on getting our migration done and we don't have our final production project migrated yet, but i could test on our current test project next week sometime
- SH
(we're using a workaround for now by routing the storage request through an nhost serverless function)
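A minimal sketch of that workaround (not SH's actual code; the file path, header names, and env var are assumptions) -- the browser calls a serverless function on an allowed origin, and the function forwards the request to storage server-side, where CORS doesn't apply:

```ts
// functions/get-file.ts (hypothetical path)
// Assumes Node 18+ for global fetch; Nhost serverless functions use
// Express-style (req, res) handlers.
import { Request, Response } from "express";

export default async (req: Request, res: Response) => {
  const linkId = String(req.headers["x-link-id"] ?? ""); // hypothetical header
  const fileId = String(req.query.id ?? "");

  // Forward to the storage service with the session-variable header attached;
  // server-to-server requests are not subject to browser CORS checks.
  const upstream = await fetch(
    `${process.env.NHOST_BACKEND_URL}/v1/storage/files/${fileId}`, // env var name assumed
    { headers: { "x-hasura-link-id": linkId } }
  );

  res.status(upstream.status);
  res.send(Buffer.from(await upstream.arrayBuffer()));
};
```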
- DA
I think you could configure the cli to use the version `v0.4.0-alpha0` and test locally first. I can also update your application for you to test if/when you want. I'd like some thorough testing with a real app like yours before merging the PR to make sure there are no edge cases or weird behavior
- BE
I'm about to do the same bucket_id based permission checks, let's see if it works now