OCP Community Discussion Forum: Endpoint Query based on hostname
This topic has 16 replies, 2 voices, and was last updated 2 years, 10 months ago by G25.

October 14, 2021 at 6:09 pm · #517 · G25 (Participant)
Hello!
I really appreciate your work here on such an incredible and powerful tool!
I spent a few hours fighting with the creation of a new “EndpointQuery” file to get an output with the hostname field of the “Node” class name, but I wasn’t successful…
Taking the “ipv4NoSSH” file as an example: we have 4 copies of that file because we duplicated the root openContentPlatform folder to create the 3 clients plus the root server. But how can I create or modify this file to get the list of hostnames from the “Node” class name instead of the IPv4 addresses?
My goals are:
1. Get the “find_powershell” job to work via FQDN/hostname instead of IPv4, to ensure New-PSSession works correctly when finding endpoints to target (we can’t do it via IP address).
2. Enable the “powershell_app_components_base” job to list the software on every endpoint we discover.
I tried several things to get it done, but I’m still trying to understand how OCP works. For example, if I delete the 4 “ipv4NoSSH” files, how can the rest of the jobs that use this EndpointQuery keep working, if there are no files left to read the EndpointQuery from?
Hope you can help me with this.
Best regards and have a nice day!!

October 14, 2021 at 7:22 pm · #518 · codingadvocate (Participant)
There were several questions/directions in that first post. I’ll try to get to most of them here.
If you want a job to use FQDN (from either the connected Node or via DNS resolution), then put a “connectViaFQDN” parameter into the parameters list on the job, and set it to true. Like this, for the find_PowerShell job:
"inputParameters": {
    "connectViaFQDN": true,
    "portTestBeforeConnectionAttempt": true,
    "portsToTest": [5985, 5986],
    "printDebug": false
},

FWIW, this used to be controlled by the OS Parameters config, but it was necessary to make it job configurable.
With that parameter set to true, the protocolWrapper.py script calls the getFQDN function, during the protocol connection attempt.
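If you’re curious what that resolution step boils down to, here is a rough Python sketch of the general idea (not the exact code in protocolWrapper.py, just the standard-library approach of resolving the connected IP back to a fully qualified name):

import socket

def resolve_fqdn(ip_address):
    # Illustration only: reverse-resolve an IP, then expand it to a FQDN.
    # Falls back to the bare IP when reverse DNS has no entry for it.
    try:
        hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
        return socket.getfqdn(hostname)
    except (socket.herror, socket.gaierror):
        return ip_address

# Example: prefer the resolved name as the New-PSSession target
target = resolve_fqdn('192.168.1.50')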
I don’t understand the request for powershell_app_components_base. It already passes the previously discovered software through. And after you run the discovery, you can do an API query (or set up an integration) to do the same… to pull the node-to-software content. If you’re just looking for a report, then run the “report_node_details” job from the universalJob service. It will create an Excel workbook with sample default discovered results for you… including software.
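As a rough illustration of the API route, something like the following Python would pull the results back; the URL, port, path, and headers below are placeholders only (check your own API service settings for the real values):

import requests

# Placeholder endpoint and headers; substitute your OCP server, API port,
# query path, and whatever authentication your API service requires.
url = 'https://ocp-server.example.com:443/api/query/node-to-software'
headers = {'Content-Type': 'application/json'}

response = requests.get(url, headers=headers, verify=False)  # verify=False only for self-signed lab certs
response.raise_for_status()
print(response.json())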
You can have a non-IP-based endpoint query… you just have to make sure other things are set up/aligned to allow it to pass through. That’s more detail than I’ll type up right now, since I believe the above content gets you on the road again.

October 14, 2021 at 10:22 pm · #519 · G25 (Participant)
OMG, such nice and detailed information!
Thank you so much for sharing your knowledge with me (and the world).
It’s late in my country right now, so tomorrow I will test all this new information you’ve given me! I’ll post an update to let you know; your instructions gave some light to a lost worker 🙂
Thank you so much for your help, I appreciate it so much!

October 14, 2021 at 11:07 pm · #520 · codingadvocate (Participant)
Thanks G25. I’m happy to help.

October 15, 2021 at 8:58 am · #521 · G25 (Participant)
Hello codingadvocate!
The parameter you gave me works flawlessly; the connections are now made via FQDN (and now I’m dealing with some Kerberos issues with New-PSSession).
I was wondering: how can I find out whether there are more parameters like that which might be helpful for tuning the jobs? Is there any documentation where I can look up the available options, or is what I’m asking not on the internet and only available privately?
Thank you so much for your help!

October 15, 2021 at 12:13 pm · #522 · G25 (Participant)
Hello again! Sorry to bother you with another question, but I’m so close to achieving my 2 goals!
I finally got the find_powershell job set up and resolved all of the Kerberos problems, but now I’m facing the same IP-address error with New-PSSession on the “powershell_app_components_base” job under dynamicDiscovery. As in the previous message, even though I put the same “connectViaFQDN” option in the “inputParameters” of the job, I can see in the Statistics (or logs) section that the PS calls are still made via IP address instead of FQDN, so New-PSSession keeps failing over and over again…
"inputParameters": {
    "connectViaFQDN": true,
    "printDebug": false,
    "showCommandParsing": false
}

Is there any other option, or am I maybe missing something?
Thank you so much for your support and your help!
Best regards!

October 15, 2021 at 3:42 pm · #523 · codingadvocate (Participant)
Most of the documentation is provided through consulting services. I’d refer you to the consulting link on this website to pursue that route.
Regarding downstream usage of the endpoint as FQDN:
I just took a look. It appears this is a bug from the previous migration, when this went from using a global setting (OS Parameters config) down to the local job level. The endpoint (instead of just assuming the IP) needs to be added to the protocol instance for it to continue being used in downstream jobs, after the initial Find job. I find it odd that other companies haven’t hit this… they must not be using FQDN for PowerShell. Anyway, it’s not a hard fix, but it will require a change and a test cycle.

October 16, 2021 at 3:42 pm · #524 · G25 (Participant)
Wow, so there’s a bug in that job…
Is there any global option (maybe in the OS Parameters config) where I can change the “default” behavior so that this type of job and discovery runs via FQDN instead of IP? Like *always* going through FQDN, given that in my environment things based on IP are not possible or not working as expected with the normal functioning of this awesome tool…
Thank you so much!! I will look into the consulting services to learn more once I get this tool doing what I want to see… since this will be my “landscape” into the powerful engine this tool has…

October 17, 2021 at 9:43 pm · #525 · codingadvocate (Participant)
Yes, the software discovery in this tool is far more powerful than anything available with big-vendor proprietary ITSM discovery products. Not to mention it’s agent-less and doesn’t require a software database.
No, there isn’t a bug in the job; in fact it has nothing to do with the job. It’s at the protocol level. I called it a bug (actually a regression) because the functionality worked via a global setting many versions ago, before it was switched over to local job parameters. The protocols are intended to be wrapped/overridden by customers, but I’ll leave that topic alone for now.
As I mentioned, it’s a small change. So I went ahead and did it for you, here: https://github.com/opencontentplatform/ocp/tree/protocol-connections-via-named-endpoints (I didn’t get to test it, but figured you can do that). To apply the patch: stop openContentPlatform, then update the files in the last uploaded changeset (lib/protocolWrapper and database/schema/softwareElement)… in both your server and client dirs. You’ll also need to manually add the ‘endpoint’ column to the data/protocol table (a nullable VARCHAR(256)), unless you want to reinitialize the DB (which would blow away your current protocols). Restart openContentPlatform, and try again.
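If you’d rather script that column addition than do it by hand in a DB tool, here is a quick sketch of the idea (assuming a PostgreSQL backend with psycopg2; the connection details are placeholders, and adjust the schema/table name to whatever your install actually uses):

import psycopg2

# Placeholder connection details; use your own DB host, name, and credentials.
conn = psycopg2.connect(host='localhost', dbname='ocp', user='ocp_user', password='secret')
try:
    with conn.cursor() as cur:
        # Nullable VARCHAR(256) 'endpoint' column, as described above
        cur.execute('ALTER TABLE data.protocol ADD COLUMN endpoint VARCHAR(256)')
    conn.commit()
finally:
    conn.close()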
Whether you look into consulting services is up to you; I was simply responding to your previous request on documentation.
To my knowledge, the only free public-facing documentation is this website. You’re welcome to create some that you think would be helpful, and submit them as a PR.

October 18, 2021 at 9:24 am · #527 · G25 (Participant)
Hey codingadvocate!
I was finally able to set up dynamicDiscovery via FQDN, and with the last update you made on GitHub it now works flawlessly. This is amazing!! I chose the rebuild option for the platform, since my OCP server was full of tests and things I wanted to delete; now it is clean and showing the required information, plus I completed my 2 initial goals 🙂
“Yes, the software discovery in this tool is far more powerful than anything available with big-vendor proprietary ITSM discovery products. Not to mention it’s agent-less and doesn’t require a software database.”
I totally agree with this; it’s such an amazing feature. With a few clicks and a bit of typing, you can build a complete database of the software installed on your infrastructure: not only software but also services (and headless dependencies too).
How can I thank you for the useful information and guidance you gave me?
Best regards!!!

October 18, 2021 at 7:36 pm · #528 · codingadvocate (Participant)
I’m glad it’s working for you now.
How can you thank? You’ve freely received… so freely give. Message it out via LinkedIn or other social platforms.
You can recommend the listing on SourceForge: https://sourceforge.net/software/product/Open-Content-Platform/
Or get involved and become part of the coding community. 🙂

November 4, 2021 at 12:43 am · #530 · G25 (Participant)
Hello again!
I have been playing with the platform all this time and I finally managed to present the project to our company. I was wondering if you know how we can hire the “services”, and whether you (personally) would be interested in some “learning days”: just 2 or 3 days to teach us how to fully interact with the platform. Maybe we can build a script or a “guide” just to lay out the things we’re interested in learning…
For example, I’m dealing with the “Metadata” and “Views” options. When I build a view (following the whole process shown in the video in the “Modeling” section) with some process, software, or similar, and then go to see the view, it doesn’t show on the screen; it’s just white… I can’t find any errors in the logs or anywhere else, so I just don’t know why the diagram isn’t showing…

November 4, 2021 at 12:54 am · #532 · codingadvocate (Participant)
Certainly. I’d be happy to help with hours (or days or whatever you’re envisioning) of paid engagements – like training. Go to the Consulting page on the website, click on CMS Construct and use the Contact Us form. They take care of the paperwork. Tell them that you specifically want codingadvocate to help.
And for what it’s worth, your blank page on the Model View (assuming you built the model meta-data correctly) is likely due to not having the job run… in order to update the model. 😉 Check to see if the jobs are active; they are under the UniversalJob service – so ensure you have one of those OCP clients running.

November 4, 2021 at 1:18 pm · #533 · G25 (Participant)
Hello again!
I already set up contact using my company email, requesting some information about the pricing plans and all that.
Regarding my current problem with the views: I think you’re referring to the UniversalJob service (running in CMD and active in the AdminConsole) and then the “logicalModels” section with the 2 jobs “build_application_via_process” and “build_application_via_software”. Both are active and running, plus I changed the Trigger Type to an “interval” of every minute to see if that solves the issue, but there seems to be no change… All the jobs in every section under UniversalJob are active right now, but still no “information” or “map” shows in the view screen section…
For the metadata app, I followed the video guide on CMS, building the same thing as shown, with 2 different domains and 2 different locations (via IP address), with the same regular expressions and only 1 filter on the Discoverable Object, so “I think” this part is “OK”…
Maybe I’m missing something? Maybe I built something wrong?

November 5, 2021 at 12:56 am · #534 · codingadvocate (Participant)
Hey G25,
Unfortunately, nothing stands out. I’d have to jump into the details with you to figure out where the problem is.

November 25, 2021 at 5:07 pm · #535 · codingadvocate (Participant)
Hey G25,
You may have figured this out by now, but I just remembered this convo and may have the answer. You said you followed the video guide (walk through on modeling), but you didn’t mention what your “Discoverable Object” was. I think that’s the issue. I’ll explain:
The current jobs (build_application_via_processes and build_application_via_software) are configured to use a specific set of definitions. Take a look at (and compare/contrast) the job parameters. They call the same script to do the work, but the “targetQuery” and the “metaDataQuery” are specific to each job. One works off of “ProcessFingerprint” and the other off of “SoftwareFingerprint”. So, if your “Discoverable Object” is a different class… you just need to create a new job just like the current two, with different parameters for targetQuery and metaDataQuery. And of course you need to create those two input queries.
Cheers.

November 26, 2021 at 9:27 am · #536 · G25 (Participant)
Hello codingadvocate!!
I was finally able to make it work (I was using process and software as well). The problem was that I had changed the “localhost” value of “apiIpAddress” to the real IP in the globalSetting, thinking this would make it reachable from outside the server; that caused the API calls to fail, so the jobs never updated the view. Since I saw on the system that the services/ports were up but only listening on localhost, I thought that if I changed this setting to the real IP, the “apiIpAddress” would be exposed on the network and reachable from outside, so I could use the adminconsole directly from my machine, but it seems that’s not the way…
Thank you for reaching out with your awesome ideas and tips; every word is a light in the dark!
Regards!!